00:00:00.001 Started by upstream project "autotest-per-patch" build number 124201
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.125 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.126 The recommended git tool is: git
00:00:00.126 using credential 00000000-0000-0000-0000-000000000002
00:00:00.127 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.162 Fetching changes from the remote Git repository
00:00:00.163 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.191 Using shallow fetch with depth 1
00:00:00.191 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.191 > git --version # timeout=10
00:00:00.221 > git --version # 'git version 2.39.2'
00:00:00.221 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.247 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.247 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.106 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.116 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.127 Checking out Revision 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 (FETCH_HEAD)
00:00:06.127 > git config core.sparsecheckout # timeout=10
00:00:06.136 > git read-tree -mu HEAD # timeout=10
00:00:06.160 > git checkout -f 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=5
00:00:06.183 Commit message: "pool: fixes for VisualBuild class"
00:00:06.184 > git rev-list --no-walk 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=10
00:00:06.278 [Pipeline] Start of Pipeline
00:00:06.289 [Pipeline] library
00:00:06.291 Loading library shm_lib@master
00:00:06.291 Library shm_lib@master is cached. Copying from home.
00:00:06.310 [Pipeline] node
00:00:06.320 Running on WFP22 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.322 [Pipeline] {
00:00:06.333 [Pipeline] catchError
00:00:06.335 [Pipeline] {
00:00:06.350 [Pipeline] wrap
00:00:06.362 [Pipeline] {
00:00:06.371 [Pipeline] stage
00:00:06.373 [Pipeline] { (Prologue)
00:00:06.564 [Pipeline] sh
00:00:06.840 + logger -p user.info -t JENKINS-CI
00:00:06.857 [Pipeline] echo
00:00:06.858 Node: WFP22
00:00:06.865 [Pipeline] sh
00:00:07.152 [Pipeline] setCustomBuildProperty
00:00:07.162 [Pipeline] echo
00:00:07.163 Cleanup processes
00:00:07.168 [Pipeline] sh
00:00:07.448 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.448 1910973 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.460 [Pipeline] sh
00:00:07.740 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.740 ++ grep -v 'sudo pgrep'
00:00:07.740 ++ awk '{print $1}'
00:00:07.740 + sudo kill -9
00:00:07.740 + true
00:00:07.753 [Pipeline] cleanWs
00:00:07.764 [WS-CLEANUP] Deleting project workspace...
00:00:07.764 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.769 [WS-CLEANUP] done 00:00:07.773 [Pipeline] setCustomBuildProperty 00:00:07.786 [Pipeline] sh 00:00:08.066 + sudo git config --global --replace-all safe.directory '*' 00:00:08.144 [Pipeline] nodesByLabel 00:00:08.145 Found a total of 2 nodes with the 'sorcerer' label 00:00:08.156 [Pipeline] httpRequest 00:00:08.161 HttpMethod: GET 00:00:08.162 URL: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:08.164 Sending request to url: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:08.185 Response Code: HTTP/1.1 200 OK 00:00:08.185 Success: Status code 200 is in the accepted range: 200,404 00:00:08.186 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:17.927 [Pipeline] sh 00:00:18.213 + tar --no-same-owner -xf jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:18.232 [Pipeline] httpRequest 00:00:18.237 HttpMethod: GET 00:00:18.238 URL: http://10.211.164.101/packages/spdk_0a5aebcde18f5ee4c9dba0f68189ed0c7ac9f3cf.tar.gz 00:00:18.238 Sending request to url: http://10.211.164.101/packages/spdk_0a5aebcde18f5ee4c9dba0f68189ed0c7ac9f3cf.tar.gz 00:00:18.258 Response Code: HTTP/1.1 200 OK 00:00:18.259 Success: Status code 200 is in the accepted range: 200,404 00:00:18.259 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_0a5aebcde18f5ee4c9dba0f68189ed0c7ac9f3cf.tar.gz 00:01:07.565 [Pipeline] sh 00:01:07.853 + tar --no-same-owner -xf spdk_0a5aebcde18f5ee4c9dba0f68189ed0c7ac9f3cf.tar.gz 00:01:10.406 [Pipeline] sh 00:01:10.692 + git -C spdk log --oneline -n5 00:01:10.692 0a5aebcde go/rpc: Initial implementation of rpc call generator 00:01:10.692 8b1e208cc python/rpc: Python rpc docs generator. 00:01:10.692 98215362c python/rpc: Replace jsonrpc.md with generated docs 00:01:10.692 43217a125 python/rpc: Python rpc call generator. 
00:01:10.692 902020273 python/rpc: Replace bdev.py with generated rpc's 00:01:10.705 [Pipeline] } 00:01:10.722 [Pipeline] // stage 00:01:10.732 [Pipeline] stage 00:01:10.734 [Pipeline] { (Prepare) 00:01:10.753 [Pipeline] writeFile 00:01:10.772 [Pipeline] sh 00:01:11.057 + logger -p user.info -t JENKINS-CI 00:01:11.071 [Pipeline] sh 00:01:11.355 + logger -p user.info -t JENKINS-CI 00:01:11.368 [Pipeline] sh 00:01:11.652 + cat autorun-spdk.conf 00:01:11.652 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:11.652 SPDK_TEST_NVMF=1 00:01:11.652 SPDK_TEST_NVME_CLI=1 00:01:11.652 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:11.652 SPDK_TEST_NVMF_NICS=e810 00:01:11.652 SPDK_TEST_VFIOUSER=1 00:01:11.652 SPDK_RUN_UBSAN=1 00:01:11.652 NET_TYPE=phy 00:01:11.659 RUN_NIGHTLY=0 00:01:11.663 [Pipeline] readFile 00:01:11.687 [Pipeline] withEnv 00:01:11.689 [Pipeline] { 00:01:11.703 [Pipeline] sh 00:01:11.987 + set -ex 00:01:11.987 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:11.987 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:11.987 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:11.987 ++ SPDK_TEST_NVMF=1 00:01:11.987 ++ SPDK_TEST_NVME_CLI=1 00:01:11.987 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:11.987 ++ SPDK_TEST_NVMF_NICS=e810 00:01:11.987 ++ SPDK_TEST_VFIOUSER=1 00:01:11.987 ++ SPDK_RUN_UBSAN=1 00:01:11.987 ++ NET_TYPE=phy 00:01:11.987 ++ RUN_NIGHTLY=0 00:01:11.987 + case $SPDK_TEST_NVMF_NICS in 00:01:11.987 + DRIVERS=ice 00:01:11.987 + [[ tcp == \r\d\m\a ]] 00:01:11.987 + [[ -n ice ]] 00:01:11.987 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:11.987 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:11.987 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:11.987 rmmod: ERROR: Module irdma is not currently loaded 00:01:11.987 rmmod: ERROR: Module i40iw is not currently loaded 00:01:11.987 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:11.987 + true 00:01:11.987 + for D in $DRIVERS 00:01:11.987 + sudo modprobe 
ice 00:01:11.987 + exit 0 00:01:11.997 [Pipeline] } 00:01:12.017 [Pipeline] // withEnv 00:01:12.023 [Pipeline] } 00:01:12.041 [Pipeline] // stage 00:01:12.052 [Pipeline] catchError 00:01:12.054 [Pipeline] { 00:01:12.071 [Pipeline] timeout 00:01:12.071 Timeout set to expire in 50 min 00:01:12.073 [Pipeline] { 00:01:12.090 [Pipeline] stage 00:01:12.092 [Pipeline] { (Tests) 00:01:12.109 [Pipeline] sh 00:01:12.396 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:12.396 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:12.396 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:12.396 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:12.396 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:12.396 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:12.396 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:12.396 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:12.396 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:12.396 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:12.396 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:12.396 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:12.396 + source /etc/os-release 00:01:12.396 ++ NAME='Fedora Linux' 00:01:12.396 ++ VERSION='38 (Cloud Edition)' 00:01:12.396 ++ ID=fedora 00:01:12.396 ++ VERSION_ID=38 00:01:12.396 ++ VERSION_CODENAME= 00:01:12.396 ++ PLATFORM_ID=platform:f38 00:01:12.396 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:12.396 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:12.396 ++ LOGO=fedora-logo-icon 00:01:12.396 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:12.396 ++ HOME_URL=https://fedoraproject.org/ 00:01:12.396 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:12.396 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:12.396 ++ 
BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:12.396 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:12.396 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:12.396 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:12.396 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:12.396 ++ SUPPORT_END=2024-05-14 00:01:12.396 ++ VARIANT='Cloud Edition' 00:01:12.396 ++ VARIANT_ID=cloud 00:01:12.396 + uname -a 00:01:12.396 Linux spdk-wfp-22 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:12.396 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:15.687 Hugepages 00:01:15.687 node hugesize free / total 00:01:15.687 node0 1048576kB 0 / 0 00:01:15.687 node0 2048kB 0 / 0 00:01:15.687 node1 1048576kB 0 / 0 00:01:15.687 node1 2048kB 0 / 0 00:01:15.687 00:01:15.687 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:15.687 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:15.687 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:15.687 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:15.687 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:15.687 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:15.687 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:15.687 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:15.687 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:15.687 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:15.687 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:15.687 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:15.687 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:15.687 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:15.687 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:15.687 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:15.687 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:15.687 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:15.687 + rm -f /tmp/spdk-ld-path 00:01:15.687 + source autorun-spdk.conf 00:01:15.687 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.687 ++ SPDK_TEST_NVMF=1 00:01:15.687 ++ 
SPDK_TEST_NVME_CLI=1 00:01:15.687 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:15.687 ++ SPDK_TEST_NVMF_NICS=e810 00:01:15.687 ++ SPDK_TEST_VFIOUSER=1 00:01:15.687 ++ SPDK_RUN_UBSAN=1 00:01:15.687 ++ NET_TYPE=phy 00:01:15.687 ++ RUN_NIGHTLY=0 00:01:15.687 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:15.687 + [[ -n '' ]] 00:01:15.687 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:15.687 + for M in /var/spdk/build-*-manifest.txt 00:01:15.687 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:15.687 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:15.687 + for M in /var/spdk/build-*-manifest.txt 00:01:15.687 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:15.687 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:15.687 ++ uname 00:01:15.687 + [[ Linux == \L\i\n\u\x ]] 00:01:15.687 + sudo dmesg -T 00:01:15.687 + sudo dmesg --clear 00:01:15.687 + dmesg_pid=1911890 00:01:15.687 + sudo dmesg -Tw 00:01:15.687 + [[ Fedora Linux == FreeBSD ]] 00:01:15.687 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:15.687 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:15.687 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:15.687 + [[ -x /usr/src/fio-static/fio ]] 00:01:15.687 + export FIO_BIN=/usr/src/fio-static/fio 00:01:15.687 + FIO_BIN=/usr/src/fio-static/fio 00:01:15.687 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:15.687 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:15.687 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:15.687 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:15.687 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:15.687 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:15.687 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:15.687 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:15.687 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:15.687 Test configuration: 00:01:15.687 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.687 SPDK_TEST_NVMF=1 00:01:15.687 SPDK_TEST_NVME_CLI=1 00:01:15.687 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:15.687 SPDK_TEST_NVMF_NICS=e810 00:01:15.687 SPDK_TEST_VFIOUSER=1 00:01:15.687 SPDK_RUN_UBSAN=1 00:01:15.687 NET_TYPE=phy 00:01:15.687 RUN_NIGHTLY=0 11:50:05 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:15.687 11:50:05 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:15.688 11:50:05 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:15.688 11:50:05 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:15.688 11:50:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:15.688 11:50:05 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:15.688 11:50:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:15.688 11:50:05 -- paths/export.sh@5 -- $ export PATH 00:01:15.688 11:50:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:15.688 11:50:05 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:15.688 11:50:05 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:15.688 11:50:05 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718013005.XXXXXX 00:01:15.688 11:50:05 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718013005.iZiNJ8 00:01:15.688 11:50:05 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:15.688 11:50:05 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:01:15.688 11:50:05 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 
00:01:15.688 11:50:05 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:15.688 11:50:05 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:15.688 11:50:05 -- common/autobuild_common.sh@453 -- $ get_config_params
00:01:15.688 11:50:05 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:01:15.688 11:50:05 -- common/autotest_common.sh@10 -- $ set +x
00:01:15.688 11:50:05 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:15.688 11:50:05 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:01:15.688 11:50:05 -- pm/common@17 -- $ local monitor
00:01:15.688 11:50:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:15.688 11:50:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:15.688 11:50:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:15.688 11:50:05 -- pm/common@21 -- $ date +%s
00:01:15.688 11:50:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:15.688 11:50:05 -- pm/common@21 -- $ date +%s
00:01:15.688 11:50:05 -- pm/common@25 -- $ sleep 1
00:01:15.688 11:50:05 -- pm/common@21 -- $ date +%s
00:01:15.688 11:50:05 -- pm/common@21 -- $ date +%s
00:01:15.688 11:50:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718013005
00:01:15.688 11:50:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718013005
00:01:15.688 11:50:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718013005
00:01:15.688 11:50:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718013005
00:01:15.948 Traceback (most recent call last):
00:01:15.948 File "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py", line 24, in
00:01:15.948 import spdk.rpc as rpc # noqa
00:01:15.948 ^^^^^^^^^^^^^^^^^^^^^^
00:01:15.948 File "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python/spdk/rpc/__init__.py", line 13, in
00:01:15.948 from . import bdev
00:01:15.948 File "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python/spdk/rpc/bdev.py", line 6, in
00:01:15.948 from spdk.rpc.rpc import *
00:01:15.948 ModuleNotFoundError: No module named 'spdk.rpc.rpc'
00:01:15.948 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718013005_collect-vmstat.pm.log
00:01:15.948 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718013005_collect-cpu-load.pm.log
00:01:15.948 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718013005_collect-cpu-temp.pm.log
00:01:15.948 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718013005_collect-bmc-pm.bmc.pm.log
00:01:16.887 11:50:06 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:01:16.887 11:50:06 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:16.887 11:50:06 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:16.887 11:50:06 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:16.887 11:50:06 -- spdk/autobuild.sh@16 -- $ date -u
00:01:16.887 Mon Jun 10 09:50:06 AM UTC 2024
00:01:16.887 11:50:06 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:16.887 v24.09-pre-63-g0a5aebcde
00:01:16.887 11:50:06 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:16.887 11:50:06 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:16.887 11:50:06 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:16.887 11:50:06 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']'
00:01:16.887 11:50:06 -- common/autotest_common.sh@1106 -- $ xtrace_disable
00:01:16.887 11:50:06 -- common/autotest_common.sh@10 -- $ set +x
00:01:16.887 ************************************
00:01:16.887 START TEST ubsan
00:01:16.887 ************************************
00:01:16.887 11:50:06 ubsan -- common/autotest_common.sh@1124 -- $ echo 'using ubsan'
00:01:16.887 using ubsan
00:01:16.887
00:01:16.887 real 0m0.000s
00:01:16.887 user 0m0.000s
00:01:16.887 sys 0m0.000s
00:01:16.887 11:50:06 ubsan -- common/autotest_common.sh@1125 -- $ xtrace_disable
00:01:16.887 11:50:06 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:16.887 ************************************
00:01:16.887 END TEST ubsan
00:01:16.887 ************************************
00:01:16.887 11:50:06 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:16.887 11:50:06 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:16.887 11:50:06 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:16.887 11:50:06 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:16.887 11:50:06 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:16.887 11:50:06 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:16.887 11:50:06 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:16.887 11:50:06 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:16.887 11:50:06 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:17.146 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:17.146 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:17.405 Using 'verbs' RDMA provider
00:01:33.213 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:45.422 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:45.422 Creating mk/config.mk...done.
00:01:45.422 Creating mk/cc.flags.mk...done.
00:01:45.422 Type 'make' to build.
00:01:45.422 11:50:33 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:01:45.422 11:50:33 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:01:45.422 11:50:33 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:01:45.422 11:50:33 -- common/autotest_common.sh@10 -- $ set +x 00:01:45.422 ************************************ 00:01:45.422 START TEST make 00:01:45.422 ************************************ 00:01:45.422 11:50:33 make -- common/autotest_common.sh@1124 -- $ make -j112 00:01:45.731 The Meson build system 00:01:45.731 Version: 1.3.1 00:01:45.731 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:45.731 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:45.731 Build type: native build 00:01:45.731 Project name: libvfio-user 00:01:45.731 Project version: 0.0.1 00:01:45.731 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:45.731 C linker for the host machine: cc ld.bfd 2.39-16 00:01:45.731 Host machine cpu family: x86_64 00:01:45.731 Host machine cpu: x86_64 00:01:45.731 Run-time dependency threads found: YES 00:01:45.731 Library dl found: YES 00:01:45.731 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:45.731 Run-time dependency json-c found: YES 0.17 00:01:45.731 Run-time dependency cmocka found: YES 1.1.7 00:01:45.731 Program pytest-3 found: NO 00:01:45.731 Program flake8 found: NO 00:01:45.731 Program misspell-fixer found: NO 00:01:45.731 Program restructuredtext-lint found: NO 00:01:45.731 Program valgrind found: YES (/usr/bin/valgrind) 00:01:45.731 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:45.731 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:45.731 Compiler for C supports arguments -Wwrite-strings: YES 00:01:45.731 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg 
in add_test_setup. 00:01:45.731 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:45.731 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:45.731 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:45.731 Build targets in project: 8 00:01:45.731 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:45.731 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:45.731 00:01:45.731 libvfio-user 0.0.1 00:01:45.731 00:01:45.731 User defined options 00:01:45.731 buildtype : debug 00:01:45.731 default_library: shared 00:01:45.731 libdir : /usr/local/lib 00:01:45.731 00:01:45.731 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:46.317 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:46.317 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:46.317 [2/37] Compiling C object samples/null.p/null.c.o 00:01:46.317 [3/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:46.317 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:46.317 [5/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:46.317 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:46.317 [7/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:46.317 [8/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:46.317 [9/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:46.317 [10/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:46.317 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:46.317 [12/37] Compiling C object 
test/unit_tests.p/.._lib_migration.c.o 00:01:46.317 [13/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:46.317 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:46.317 [15/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:46.317 [16/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:46.317 [17/37] Compiling C object samples/server.p/server.c.o 00:01:46.317 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:46.317 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:46.317 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:46.317 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:46.317 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:46.317 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:46.317 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:46.317 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:46.318 [26/37] Compiling C object samples/client.p/client.c.o 00:01:46.318 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:46.318 [28/37] Linking target lib/libvfio-user.so.0.0.1 00:01:46.318 [29/37] Linking target samples/client 00:01:46.576 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:46.576 [31/37] Linking target test/unit_tests 00:01:46.576 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:46.576 [33/37] Linking target samples/null 00:01:46.576 [34/37] Linking target samples/lspci 00:01:46.576 [35/37] Linking target samples/gpio-pci-idio-16 00:01:46.576 [36/37] Linking target samples/server 00:01:46.576 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:46.576 INFO: autodetecting backend as ninja 00:01:46.576 INFO: calculating backend command to run: /usr/local/bin/ninja -C 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:46.576 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:46.835 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:46.835 ninja: no work to do. 00:01:53.415 The Meson build system 00:01:53.415 Version: 1.3.1 00:01:53.415 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:53.415 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:53.415 Build type: native build 00:01:53.415 Program cat found: YES (/usr/bin/cat) 00:01:53.415 Project name: DPDK 00:01:53.415 Project version: 24.03.0 00:01:53.415 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:53.415 C linker for the host machine: cc ld.bfd 2.39-16 00:01:53.415 Host machine cpu family: x86_64 00:01:53.415 Host machine cpu: x86_64 00:01:53.415 Message: ## Building in Developer Mode ## 00:01:53.415 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:53.415 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:53.415 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:53.415 Program python3 found: YES (/usr/bin/python3) 00:01:53.415 Program cat found: YES (/usr/bin/cat) 00:01:53.415 Compiler for C supports arguments -march=native: YES 00:01:53.415 Checking for size of "void *" : 8 00:01:53.415 Checking for size of "void *" : 8 (cached) 00:01:53.415 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:53.415 Library m found: YES 00:01:53.415 Library numa found: YES 00:01:53.415 Has header "numaif.h" : YES 00:01:53.415 Library fdt found: NO 
00:01:53.415 Library execinfo found: NO 00:01:53.415 Has header "execinfo.h" : YES 00:01:53.415 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:53.415 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:53.415 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:53.415 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:53.415 Run-time dependency openssl found: YES 3.0.9 00:01:53.415 Run-time dependency libpcap found: YES 1.10.4 00:01:53.415 Has header "pcap.h" with dependency libpcap: YES 00:01:53.415 Compiler for C supports arguments -Wcast-qual: YES 00:01:53.415 Compiler for C supports arguments -Wdeprecated: YES 00:01:53.415 Compiler for C supports arguments -Wformat: YES 00:01:53.415 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:53.415 Compiler for C supports arguments -Wformat-security: NO 00:01:53.415 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:53.415 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:53.415 Compiler for C supports arguments -Wnested-externs: YES 00:01:53.415 Compiler for C supports arguments -Wold-style-definition: YES 00:01:53.415 Compiler for C supports arguments -Wpointer-arith: YES 00:01:53.415 Compiler for C supports arguments -Wsign-compare: YES 00:01:53.415 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:53.415 Compiler for C supports arguments -Wundef: YES 00:01:53.415 Compiler for C supports arguments -Wwrite-strings: YES 00:01:53.415 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:53.415 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:53.415 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:53.415 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:53.415 Program objdump found: YES (/usr/bin/objdump) 00:01:53.415 Compiler for C supports arguments -mavx512f: YES 00:01:53.415 Checking if "AVX512 checking" compiles: YES 00:01:53.415 
Fetching value of define "__SSE4_2__" : 1 00:01:53.415 Fetching value of define "__AES__" : 1 00:01:53.415 Fetching value of define "__AVX__" : 1 00:01:53.415 Fetching value of define "__AVX2__" : 1 00:01:53.415 Fetching value of define "__AVX512BW__" : 1 00:01:53.415 Fetching value of define "__AVX512CD__" : 1 00:01:53.415 Fetching value of define "__AVX512DQ__" : 1 00:01:53.415 Fetching value of define "__AVX512F__" : 1 00:01:53.415 Fetching value of define "__AVX512VL__" : 1 00:01:53.415 Fetching value of define "__PCLMUL__" : 1 00:01:53.415 Fetching value of define "__RDRND__" : 1 00:01:53.415 Fetching value of define "__RDSEED__" : 1 00:01:53.415 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:53.415 Fetching value of define "__znver1__" : (undefined) 00:01:53.415 Fetching value of define "__znver2__" : (undefined) 00:01:53.415 Fetching value of define "__znver3__" : (undefined) 00:01:53.415 Fetching value of define "__znver4__" : (undefined) 00:01:53.415 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:53.415 Message: lib/log: Defining dependency "log" 00:01:53.415 Message: lib/kvargs: Defining dependency "kvargs" 00:01:53.415 Message: lib/telemetry: Defining dependency "telemetry" 00:01:53.415 Checking for function "getentropy" : NO 00:01:53.415 Message: lib/eal: Defining dependency "eal" 00:01:53.415 Message: lib/ring: Defining dependency "ring" 00:01:53.415 Message: lib/rcu: Defining dependency "rcu" 00:01:53.415 Message: lib/mempool: Defining dependency "mempool" 00:01:53.415 Message: lib/mbuf: Defining dependency "mbuf" 00:01:53.415 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:53.415 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:53.415 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:53.415 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:53.415 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:53.415 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 
00:01:53.415 Compiler for C supports arguments -mpclmul: YES 00:01:53.415 Compiler for C supports arguments -maes: YES 00:01:53.415 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:53.415 Compiler for C supports arguments -mavx512bw: YES 00:01:53.415 Compiler for C supports arguments -mavx512dq: YES 00:01:53.415 Compiler for C supports arguments -mavx512vl: YES 00:01:53.415 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:53.415 Compiler for C supports arguments -mavx2: YES 00:01:53.415 Compiler for C supports arguments -mavx: YES 00:01:53.415 Message: lib/net: Defining dependency "net" 00:01:53.415 Message: lib/meter: Defining dependency "meter" 00:01:53.415 Message: lib/ethdev: Defining dependency "ethdev" 00:01:53.415 Message: lib/pci: Defining dependency "pci" 00:01:53.415 Message: lib/cmdline: Defining dependency "cmdline" 00:01:53.415 Message: lib/hash: Defining dependency "hash" 00:01:53.415 Message: lib/timer: Defining dependency "timer" 00:01:53.415 Message: lib/compressdev: Defining dependency "compressdev" 00:01:53.415 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:53.415 Message: lib/dmadev: Defining dependency "dmadev" 00:01:53.415 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:53.415 Message: lib/power: Defining dependency "power" 00:01:53.415 Message: lib/reorder: Defining dependency "reorder" 00:01:53.415 Message: lib/security: Defining dependency "security" 00:01:53.415 Has header "linux/userfaultfd.h" : YES 00:01:53.415 Has header "linux/vduse.h" : YES 00:01:53.415 Message: lib/vhost: Defining dependency "vhost" 00:01:53.415 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:53.415 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:53.415 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:53.415 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:53.415 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 
00:01:53.416 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:53.416 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:53.416 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:53.416 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:53.416 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:53.416 Program doxygen found: YES (/usr/bin/doxygen)
00:01:53.416 Configuring doxy-api-html.conf using configuration
00:01:53.416 Configuring doxy-api-man.conf using configuration
00:01:53.416 Program mandb found: YES (/usr/bin/mandb)
00:01:53.416 Program sphinx-build found: NO
00:01:53.416 Configuring rte_build_config.h using configuration
00:01:53.416 Message:
00:01:53.416 =================
00:01:53.416 Applications Enabled
00:01:53.416 =================
00:01:53.416
00:01:53.416 apps:
00:01:53.416
00:01:53.416
00:01:53.416 Message:
00:01:53.416 =================
00:01:53.416 Libraries Enabled
00:01:53.416 =================
00:01:53.416
00:01:53.416 libs:
00:01:53.416 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:53.416 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:53.416 cryptodev, dmadev, power, reorder, security, vhost,
00:01:53.416
00:01:53.416 Message:
00:01:53.416 ===============
00:01:53.416 Drivers Enabled
00:01:53.416 ===============
00:01:53.416
00:01:53.416 common:
00:01:53.416
00:01:53.416 bus:
00:01:53.416 pci, vdev,
00:01:53.416 mempool:
00:01:53.416 ring,
00:01:53.416 dma:
00:01:53.416
00:01:53.416 net:
00:01:53.416
00:01:53.416 crypto:
00:01:53.416
00:01:53.416 compress:
00:01:53.416
00:01:53.416 vdpa:
00:01:53.416
00:01:53.416
00:01:53.416 Message:
00:01:53.416 =================
00:01:53.416 Content Skipped
00:01:53.416 =================
00:01:53.416
00:01:53.416 apps:
00:01:53.416 dumpcap: explicitly disabled via build config
00:01:53.416 graph: explicitly disabled via build config
00:01:53.416 pdump: explicitly disabled via build config 00:01:53.416 proc-info: explicitly disabled via build config 00:01:53.416 test-acl: explicitly disabled via build config 00:01:53.416 test-bbdev: explicitly disabled via build config 00:01:53.416 test-cmdline: explicitly disabled via build config 00:01:53.416 test-compress-perf: explicitly disabled via build config 00:01:53.416 test-crypto-perf: explicitly disabled via build config 00:01:53.416 test-dma-perf: explicitly disabled via build config 00:01:53.416 test-eventdev: explicitly disabled via build config 00:01:53.416 test-fib: explicitly disabled via build config 00:01:53.416 test-flow-perf: explicitly disabled via build config 00:01:53.416 test-gpudev: explicitly disabled via build config 00:01:53.416 test-mldev: explicitly disabled via build config 00:01:53.416 test-pipeline: explicitly disabled via build config 00:01:53.416 test-pmd: explicitly disabled via build config 00:01:53.416 test-regex: explicitly disabled via build config 00:01:53.416 test-sad: explicitly disabled via build config 00:01:53.416 test-security-perf: explicitly disabled via build config 00:01:53.416 00:01:53.416 libs: 00:01:53.416 argparse: explicitly disabled via build config 00:01:53.416 metrics: explicitly disabled via build config 00:01:53.416 acl: explicitly disabled via build config 00:01:53.416 bbdev: explicitly disabled via build config 00:01:53.416 bitratestats: explicitly disabled via build config 00:01:53.416 bpf: explicitly disabled via build config 00:01:53.416 cfgfile: explicitly disabled via build config 00:01:53.416 distributor: explicitly disabled via build config 00:01:53.416 efd: explicitly disabled via build config 00:01:53.416 eventdev: explicitly disabled via build config 00:01:53.416 dispatcher: explicitly disabled via build config 00:01:53.416 gpudev: explicitly disabled via build config 00:01:53.416 gro: explicitly disabled via build config 00:01:53.416 gso: explicitly disabled via build config 
00:01:53.416 ip_frag: explicitly disabled via build config 00:01:53.416 jobstats: explicitly disabled via build config 00:01:53.416 latencystats: explicitly disabled via build config 00:01:53.416 lpm: explicitly disabled via build config 00:01:53.416 member: explicitly disabled via build config 00:01:53.416 pcapng: explicitly disabled via build config 00:01:53.416 rawdev: explicitly disabled via build config 00:01:53.416 regexdev: explicitly disabled via build config 00:01:53.416 mldev: explicitly disabled via build config 00:01:53.416 rib: explicitly disabled via build config 00:01:53.416 sched: explicitly disabled via build config 00:01:53.416 stack: explicitly disabled via build config 00:01:53.416 ipsec: explicitly disabled via build config 00:01:53.416 pdcp: explicitly disabled via build config 00:01:53.416 fib: explicitly disabled via build config 00:01:53.416 port: explicitly disabled via build config 00:01:53.416 pdump: explicitly disabled via build config 00:01:53.416 table: explicitly disabled via build config 00:01:53.416 pipeline: explicitly disabled via build config 00:01:53.416 graph: explicitly disabled via build config 00:01:53.416 node: explicitly disabled via build config 00:01:53.416 00:01:53.416 drivers: 00:01:53.416 common/cpt: not in enabled drivers build config 00:01:53.416 common/dpaax: not in enabled drivers build config 00:01:53.416 common/iavf: not in enabled drivers build config 00:01:53.416 common/idpf: not in enabled drivers build config 00:01:53.416 common/ionic: not in enabled drivers build config 00:01:53.416 common/mvep: not in enabled drivers build config 00:01:53.416 common/octeontx: not in enabled drivers build config 00:01:53.416 bus/auxiliary: not in enabled drivers build config 00:01:53.416 bus/cdx: not in enabled drivers build config 00:01:53.416 bus/dpaa: not in enabled drivers build config 00:01:53.416 bus/fslmc: not in enabled drivers build config 00:01:53.416 bus/ifpga: not in enabled drivers build config 00:01:53.416 
bus/platform: not in enabled drivers build config 00:01:53.416 bus/uacce: not in enabled drivers build config 00:01:53.416 bus/vmbus: not in enabled drivers build config 00:01:53.416 common/cnxk: not in enabled drivers build config 00:01:53.416 common/mlx5: not in enabled drivers build config 00:01:53.416 common/nfp: not in enabled drivers build config 00:01:53.416 common/nitrox: not in enabled drivers build config 00:01:53.416 common/qat: not in enabled drivers build config 00:01:53.416 common/sfc_efx: not in enabled drivers build config 00:01:53.416 mempool/bucket: not in enabled drivers build config 00:01:53.416 mempool/cnxk: not in enabled drivers build config 00:01:53.416 mempool/dpaa: not in enabled drivers build config 00:01:53.416 mempool/dpaa2: not in enabled drivers build config 00:01:53.416 mempool/octeontx: not in enabled drivers build config 00:01:53.416 mempool/stack: not in enabled drivers build config 00:01:53.416 dma/cnxk: not in enabled drivers build config 00:01:53.416 dma/dpaa: not in enabled drivers build config 00:01:53.416 dma/dpaa2: not in enabled drivers build config 00:01:53.416 dma/hisilicon: not in enabled drivers build config 00:01:53.416 dma/idxd: not in enabled drivers build config 00:01:53.416 dma/ioat: not in enabled drivers build config 00:01:53.416 dma/skeleton: not in enabled drivers build config 00:01:53.416 net/af_packet: not in enabled drivers build config 00:01:53.416 net/af_xdp: not in enabled drivers build config 00:01:53.416 net/ark: not in enabled drivers build config 00:01:53.416 net/atlantic: not in enabled drivers build config 00:01:53.416 net/avp: not in enabled drivers build config 00:01:53.416 net/axgbe: not in enabled drivers build config 00:01:53.416 net/bnx2x: not in enabled drivers build config 00:01:53.416 net/bnxt: not in enabled drivers build config 00:01:53.416 net/bonding: not in enabled drivers build config 00:01:53.416 net/cnxk: not in enabled drivers build config 00:01:53.416 net/cpfl: not in enabled 
drivers build config 00:01:53.416 net/cxgbe: not in enabled drivers build config 00:01:53.416 net/dpaa: not in enabled drivers build config 00:01:53.416 net/dpaa2: not in enabled drivers build config 00:01:53.416 net/e1000: not in enabled drivers build config 00:01:53.416 net/ena: not in enabled drivers build config 00:01:53.416 net/enetc: not in enabled drivers build config 00:01:53.416 net/enetfec: not in enabled drivers build config 00:01:53.416 net/enic: not in enabled drivers build config 00:01:53.416 net/failsafe: not in enabled drivers build config 00:01:53.416 net/fm10k: not in enabled drivers build config 00:01:53.416 net/gve: not in enabled drivers build config 00:01:53.416 net/hinic: not in enabled drivers build config 00:01:53.416 net/hns3: not in enabled drivers build config 00:01:53.416 net/i40e: not in enabled drivers build config 00:01:53.416 net/iavf: not in enabled drivers build config 00:01:53.416 net/ice: not in enabled drivers build config 00:01:53.416 net/idpf: not in enabled drivers build config 00:01:53.416 net/igc: not in enabled drivers build config 00:01:53.416 net/ionic: not in enabled drivers build config 00:01:53.416 net/ipn3ke: not in enabled drivers build config 00:01:53.416 net/ixgbe: not in enabled drivers build config 00:01:53.416 net/mana: not in enabled drivers build config 00:01:53.416 net/memif: not in enabled drivers build config 00:01:53.416 net/mlx4: not in enabled drivers build config 00:01:53.416 net/mlx5: not in enabled drivers build config 00:01:53.416 net/mvneta: not in enabled drivers build config 00:01:53.416 net/mvpp2: not in enabled drivers build config 00:01:53.416 net/netvsc: not in enabled drivers build config 00:01:53.416 net/nfb: not in enabled drivers build config 00:01:53.416 net/nfp: not in enabled drivers build config 00:01:53.416 net/ngbe: not in enabled drivers build config 00:01:53.416 net/null: not in enabled drivers build config 00:01:53.416 net/octeontx: not in enabled drivers build config 
00:01:53.416 net/octeon_ep: not in enabled drivers build config 00:01:53.416 net/pcap: not in enabled drivers build config 00:01:53.416 net/pfe: not in enabled drivers build config 00:01:53.416 net/qede: not in enabled drivers build config 00:01:53.416 net/ring: not in enabled drivers build config 00:01:53.416 net/sfc: not in enabled drivers build config 00:01:53.416 net/softnic: not in enabled drivers build config 00:01:53.416 net/tap: not in enabled drivers build config 00:01:53.416 net/thunderx: not in enabled drivers build config 00:01:53.417 net/txgbe: not in enabled drivers build config 00:01:53.417 net/vdev_netvsc: not in enabled drivers build config 00:01:53.417 net/vhost: not in enabled drivers build config 00:01:53.417 net/virtio: not in enabled drivers build config 00:01:53.417 net/vmxnet3: not in enabled drivers build config 00:01:53.417 raw/*: missing internal dependency, "rawdev" 00:01:53.417 crypto/armv8: not in enabled drivers build config 00:01:53.417 crypto/bcmfs: not in enabled drivers build config 00:01:53.417 crypto/caam_jr: not in enabled drivers build config 00:01:53.417 crypto/ccp: not in enabled drivers build config 00:01:53.417 crypto/cnxk: not in enabled drivers build config 00:01:53.417 crypto/dpaa_sec: not in enabled drivers build config 00:01:53.417 crypto/dpaa2_sec: not in enabled drivers build config 00:01:53.417 crypto/ipsec_mb: not in enabled drivers build config 00:01:53.417 crypto/mlx5: not in enabled drivers build config 00:01:53.417 crypto/mvsam: not in enabled drivers build config 00:01:53.417 crypto/nitrox: not in enabled drivers build config 00:01:53.417 crypto/null: not in enabled drivers build config 00:01:53.417 crypto/octeontx: not in enabled drivers build config 00:01:53.417 crypto/openssl: not in enabled drivers build config 00:01:53.417 crypto/scheduler: not in enabled drivers build config 00:01:53.417 crypto/uadk: not in enabled drivers build config 00:01:53.417 crypto/virtio: not in enabled drivers build config 
00:01:53.417 compress/isal: not in enabled drivers build config
00:01:53.417 compress/mlx5: not in enabled drivers build config
00:01:53.417 compress/nitrox: not in enabled drivers build config
00:01:53.417 compress/octeontx: not in enabled drivers build config
00:01:53.417 compress/zlib: not in enabled drivers build config
00:01:53.417 regex/*: missing internal dependency, "regexdev"
00:01:53.417 ml/*: missing internal dependency, "mldev"
00:01:53.417 vdpa/ifc: not in enabled drivers build config
00:01:53.417 vdpa/mlx5: not in enabled drivers build config
00:01:53.417 vdpa/nfp: not in enabled drivers build config
00:01:53.417 vdpa/sfc: not in enabled drivers build config
00:01:53.417 event/*: missing internal dependency, "eventdev"
00:01:53.417 baseband/*: missing internal dependency, "bbdev"
00:01:53.417 gpu/*: missing internal dependency, "gpudev"
00:01:53.417
00:01:53.417
00:01:53.417 Build targets in project: 85
00:01:53.417
00:01:53.417 DPDK 24.03.0
00:01:53.417
00:01:53.417 User defined options
00:01:53.417 buildtype : debug
00:01:53.417 default_library : shared
00:01:53.417 libdir : lib
00:01:53.417 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:53.417 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:53.417 c_link_args :
00:01:53.417 cpu_instruction_set: native
00:01:53.417 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib
00:01:53.417 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,argparse,pipeline,bbdev,table,metrics,member,jobstats,efd,rib
00:01:53.417 enable_docs : false
00:01:53.417 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:53.417 enable_kmods : false
00:01:53.417 tests : false
00:01:53.417
00:01:53.417 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:53.417 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:53.417 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:53.417 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:53.417 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:53.417 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:53.417 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:53.417 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:53.417 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:53.417 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:53.417 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:53.417 [10/268] Linking static target lib/librte_kvargs.a
00:01:53.417 [11/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:53.417 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:53.417 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:53.417 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:53.417 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:53.417 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:53.417 [17/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:53.417 [18/268] Linking static target lib/librte_log.a
00:01:53.417 [19/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:53.417 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:53.417
[21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:53.417 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:53.417 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:53.417 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:53.417 [25/268] Linking static target lib/librte_pci.a 00:01:53.417 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:53.417 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:53.417 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:53.417 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:53.417 [30/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:53.417 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:53.417 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:53.417 [33/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:53.417 [34/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:53.417 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:53.677 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:53.677 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:53.677 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:53.677 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:53.677 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:53.677 [41/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:53.677 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:53.677 [43/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:53.677 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:53.677 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:53.677 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:53.677 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:53.677 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:53.677 [49/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:53.677 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:53.677 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:53.677 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:53.677 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:53.677 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:53.677 [55/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:53.677 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:53.677 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:53.677 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:53.677 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:53.677 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:53.677 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:53.677 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:53.677 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:53.677 [64/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:53.677 [65/268] Generating lib/kvargs.sym_chk with a custom command 
(wrapped by meson to capture output) 00:01:53.677 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:53.677 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:53.677 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:53.677 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:53.677 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:53.677 [71/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:53.677 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:53.677 [73/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.677 [74/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:53.677 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:53.677 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:53.677 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:53.677 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:53.677 [79/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:53.677 [80/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:53.677 [81/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:53.677 [82/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:53.677 [83/268] Linking static target lib/librte_meter.a 00:01:53.677 [84/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:53.677 [85/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:53.677 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:53.677 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:53.677 [88/268] Compiling 
C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:53.677 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:53.677 [90/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:53.677 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:53.677 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:53.677 [93/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:53.677 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:53.677 [95/268] Linking static target lib/librte_telemetry.a 00:01:53.678 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:53.678 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:53.678 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:53.678 [99/268] Linking static target lib/librte_ring.a 00:01:53.678 [100/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:53.678 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:53.678 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:53.678 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:53.678 [104/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:53.678 [105/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:53.678 [106/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:53.678 [107/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:53.678 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:53.678 [109/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:53.678 [110/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:53.678 [111/268] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:53.678 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:53.678 [113/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:53.678 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:53.678 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:53.678 [116/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:53.678 [117/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:53.678 [118/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:53.678 [119/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:53.678 [120/268] Linking static target lib/librte_rcu.a 00:01:53.678 [121/268] Linking static target lib/librte_cmdline.a 00:01:53.678 [122/268] Linking static target lib/librte_net.a 00:01:53.678 [123/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:53.678 [124/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:53.678 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:53.678 [126/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:53.678 [127/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:53.678 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:53.678 [129/268] Linking static target lib/librte_timer.a 00:01:53.678 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:53.937 [131/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:53.937 [132/268] Linking static target lib/librte_dmadev.a 00:01:53.937 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:53.937 [134/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:53.937 
[135/268] Linking static target lib/librte_mempool.a 00:01:53.937 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:53.937 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:53.937 [138/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:53.937 [139/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:53.937 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:53.937 [141/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:53.937 [142/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:53.937 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:53.937 [144/268] Linking static target lib/librte_compressdev.a 00:01:53.937 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:53.937 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:53.937 [147/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:53.937 [148/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:53.937 [149/268] Linking static target lib/librte_eal.a 00:01:53.937 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:53.937 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:53.937 [152/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:53.937 [153/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:53.937 [154/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.937 [155/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:53.937 [156/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.937 [157/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:53.937 [158/268] Linking target lib/librte_log.so.24.1 00:01:53.937 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:53.937 [160/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:53.937 [161/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.937 [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:53.937 [163/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:53.937 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:53.937 [165/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:53.937 [166/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:53.937 [167/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:53.937 [168/268] Linking static target lib/librte_mbuf.a 00:01:53.937 [169/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:54.196 [170/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:54.196 [171/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.196 [172/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:54.196 [173/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:54.196 [174/268] Linking static target lib/librte_power.a 00:01:54.196 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:54.196 [176/268] Linking static target lib/librte_reorder.a 00:01:54.196 [177/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.196 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:54.196 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 
00:01:54.196 [180/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:54.196 [181/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:54.196 [182/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:54.196 [183/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:54.196 [184/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:54.196 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:54.196 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:54.196 [187/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:54.196 [188/268] Linking static target lib/librte_hash.a 00:01:54.196 [189/268] Linking static target lib/librte_security.a 00:01:54.196 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:54.196 [191/268] Linking target lib/librte_kvargs.so.24.1 00:01:54.196 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:54.196 [193/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:54.196 [194/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:54.197 [195/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.197 [196/268] Linking static target lib/librte_cryptodev.a 00:01:54.197 [197/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.197 [198/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:54.197 [199/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:54.197 [200/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:54.197 [201/268] Linking static target drivers/librte_bus_vdev.a 00:01:54.197 [202/268] Linking target lib/librte_telemetry.so.24.1 00:01:54.197 [203/268] 
Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:54.197 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:54.197 [205/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:54.197 [206/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:54.197 [207/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:54.197 [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:54.456 [209/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:54.456 [210/268] Linking static target drivers/librte_bus_pci.a 00:01:54.456 [211/268] Linking static target drivers/librte_mempool_ring.a 00:01:54.456 [212/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:54.456 [213/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:54.456 [214/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.456 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.714 [216/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.714 [217/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:54.714 [218/268] Linking static target lib/librte_ethdev.a 00:01:54.714 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.714 [220/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.714 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.973 [222/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:54.973 [223/268] 
Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.973 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.233 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.233 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.233 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.801 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:55.801 [229/268] Linking static target lib/librte_vhost.a 00:01:56.369 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.749 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.322 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.854 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.854 [234/268] Linking target lib/librte_eal.so.24.1 00:02:06.854 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:07.113 [236/268] Linking target lib/librte_timer.so.24.1 00:02:07.113 [237/268] Linking target lib/librte_meter.so.24.1 00:02:07.113 [238/268] Linking target lib/librte_ring.so.24.1 00:02:07.113 [239/268] Linking target lib/librte_pci.so.24.1 00:02:07.113 [240/268] Linking target lib/librte_dmadev.so.24.1 00:02:07.113 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:07.113 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:07.113 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:07.113 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:07.113 
[245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:07.113 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:07.113 [247/268] Linking target lib/librte_rcu.so.24.1 00:02:07.113 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:07.113 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:07.373 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:07.373 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:07.373 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:07.373 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:07.632 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:07.632 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:07.632 [256/268] Linking target lib/librte_net.so.24.1 00:02:07.632 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:07.632 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:07.632 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:07.632 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:07.632 [261/268] Linking target lib/librte_hash.so.24.1 00:02:07.891 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:07.891 [263/268] Linking target lib/librte_security.so.24.1 00:02:07.891 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:07.891 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:07.891 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:07.891 [267/268] Linking target lib/librte_power.so.24.1 00:02:07.891 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:07.891 INFO: autodetecting backend as ninja 00:02:07.891 INFO: calculating backend command to run: 
/usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:09.287 CC lib/ut_mock/mock.o 00:02:09.287 CC lib/log/log.o 00:02:09.287 CC lib/log/log_flags.o 00:02:09.287 CC lib/log/log_deprecated.o 00:02:09.287 CC lib/ut/ut.o 00:02:09.287 LIB libspdk_ut_mock.a 00:02:09.287 LIB libspdk_log.a 00:02:09.287 SO libspdk_ut_mock.so.6.0 00:02:09.287 LIB libspdk_ut.a 00:02:09.287 SO libspdk_log.so.7.0 00:02:09.287 SO libspdk_ut.so.2.0 00:02:09.287 SYMLINK libspdk_ut_mock.so 00:02:09.287 SYMLINK libspdk_log.so 00:02:09.287 SYMLINK libspdk_ut.so 00:02:09.546 CC lib/dma/dma.o 00:02:09.546 CC lib/ioat/ioat.o 00:02:09.805 CXX lib/trace_parser/trace.o 00:02:09.805 CC lib/util/base64.o 00:02:09.805 CC lib/util/bit_array.o 00:02:09.805 CC lib/util/cpuset.o 00:02:09.805 CC lib/util/crc16.o 00:02:09.805 CC lib/util/crc32.o 00:02:09.805 CC lib/util/crc32c.o 00:02:09.805 CC lib/util/dif.o 00:02:09.805 CC lib/util/crc32_ieee.o 00:02:09.805 CC lib/util/crc64.o 00:02:09.805 CC lib/util/file.o 00:02:09.805 CC lib/util/fd.o 00:02:09.805 CC lib/util/hexlify.o 00:02:09.805 CC lib/util/iov.o 00:02:09.805 CC lib/util/math.o 00:02:09.805 CC lib/util/pipe.o 00:02:09.805 CC lib/util/strerror_tls.o 00:02:09.805 CC lib/util/string.o 00:02:09.805 CC lib/util/uuid.o 00:02:09.805 CC lib/util/fd_group.o 00:02:09.805 CC lib/util/xor.o 00:02:09.805 CC lib/util/zipf.o 00:02:09.805 CC lib/vfio_user/host/vfio_user_pci.o 00:02:09.805 CC lib/vfio_user/host/vfio_user.o 00:02:09.805 LIB libspdk_dma.a 00:02:09.805 SO libspdk_dma.so.4.0 00:02:09.805 LIB libspdk_ioat.a 00:02:09.805 SYMLINK libspdk_dma.so 00:02:10.064 SO libspdk_ioat.so.7.0 00:02:10.064 SYMLINK libspdk_ioat.so 00:02:10.064 LIB libspdk_vfio_user.a 00:02:10.064 SO libspdk_vfio_user.so.5.0 00:02:10.064 LIB libspdk_util.a 00:02:10.064 SYMLINK libspdk_vfio_user.so 00:02:10.064 SO libspdk_util.so.9.0 00:02:10.323 SYMLINK libspdk_util.so 00:02:10.323 LIB libspdk_trace_parser.a 00:02:10.323 SO 
libspdk_trace_parser.so.5.0 00:02:10.582 SYMLINK libspdk_trace_parser.so 00:02:10.582 CC lib/vmd/vmd.o 00:02:10.582 CC lib/vmd/led.o 00:02:10.582 CC lib/json/json_parse.o 00:02:10.582 CC lib/json/json_util.o 00:02:10.582 CC lib/env_dpdk/env.o 00:02:10.582 CC lib/env_dpdk/memory.o 00:02:10.582 CC lib/json/json_write.o 00:02:10.582 CC lib/env_dpdk/pci.o 00:02:10.582 CC lib/env_dpdk/threads.o 00:02:10.582 CC lib/env_dpdk/init.o 00:02:10.582 CC lib/env_dpdk/pci_ioat.o 00:02:10.582 CC lib/env_dpdk/pci_virtio.o 00:02:10.582 CC lib/env_dpdk/pci_vmd.o 00:02:10.582 CC lib/env_dpdk/pci_idxd.o 00:02:10.582 CC lib/env_dpdk/pci_event.o 00:02:10.582 CC lib/env_dpdk/sigbus_handler.o 00:02:10.582 CC lib/env_dpdk/pci_dpdk.o 00:02:10.582 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:10.582 CC lib/conf/conf.o 00:02:10.582 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:10.582 CC lib/rdma/rdma_verbs.o 00:02:10.582 CC lib/rdma/common.o 00:02:10.582 CC lib/idxd/idxd.o 00:02:10.582 CC lib/idxd/idxd_user.o 00:02:10.582 CC lib/idxd/idxd_kernel.o 00:02:10.840 LIB libspdk_conf.a 00:02:10.840 SO libspdk_conf.so.6.0 00:02:10.840 LIB libspdk_json.a 00:02:10.840 LIB libspdk_rdma.a 00:02:10.840 SO libspdk_json.so.6.0 00:02:11.099 SYMLINK libspdk_conf.so 00:02:11.099 SO libspdk_rdma.so.6.0 00:02:11.099 SYMLINK libspdk_json.so 00:02:11.099 SYMLINK libspdk_rdma.so 00:02:11.099 LIB libspdk_idxd.a 00:02:11.099 LIB libspdk_vmd.a 00:02:11.099 SO libspdk_idxd.so.12.0 00:02:11.099 SO libspdk_vmd.so.6.0 00:02:11.358 SYMLINK libspdk_idxd.so 00:02:11.358 SYMLINK libspdk_vmd.so 00:02:11.358 CC lib/jsonrpc/jsonrpc_server.o 00:02:11.358 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:11.358 CC lib/jsonrpc/jsonrpc_client.o 00:02:11.358 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:11.617 LIB libspdk_jsonrpc.a 00:02:11.617 SO libspdk_jsonrpc.so.6.0 00:02:11.617 LIB libspdk_env_dpdk.a 00:02:11.617 SYMLINK libspdk_jsonrpc.so 00:02:11.617 SO libspdk_env_dpdk.so.14.1 00:02:11.876 SYMLINK libspdk_env_dpdk.so 00:02:12.135 CC lib/rpc/rpc.o 
00:02:12.135 LIB libspdk_rpc.a 00:02:12.396 SO libspdk_rpc.so.6.0 00:02:12.396 SYMLINK libspdk_rpc.so 00:02:12.654 CC lib/notify/notify.o 00:02:12.654 CC lib/notify/notify_rpc.o 00:02:12.654 CC lib/keyring/keyring.o 00:02:12.654 CC lib/keyring/keyring_rpc.o 00:02:12.654 CC lib/trace/trace.o 00:02:12.654 CC lib/trace/trace_flags.o 00:02:12.654 CC lib/trace/trace_rpc.o 00:02:12.913 LIB libspdk_notify.a 00:02:12.913 SO libspdk_notify.so.6.0 00:02:12.913 LIB libspdk_keyring.a 00:02:12.913 LIB libspdk_trace.a 00:02:12.913 SYMLINK libspdk_notify.so 00:02:12.913 SO libspdk_keyring.so.1.0 00:02:12.913 SO libspdk_trace.so.10.0 00:02:12.913 SYMLINK libspdk_keyring.so 00:02:12.913 SYMLINK libspdk_trace.so 00:02:13.479 CC lib/thread/thread.o 00:02:13.479 CC lib/thread/iobuf.o 00:02:13.479 CC lib/sock/sock.o 00:02:13.479 CC lib/sock/sock_rpc.o 00:02:13.738 LIB libspdk_sock.a 00:02:13.738 SO libspdk_sock.so.9.0 00:02:13.738 SYMLINK libspdk_sock.so 00:02:14.306 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:14.306 CC lib/nvme/nvme_ctrlr.o 00:02:14.306 CC lib/nvme/nvme_fabric.o 00:02:14.306 CC lib/nvme/nvme_ns_cmd.o 00:02:14.306 CC lib/nvme/nvme_ns.o 00:02:14.306 CC lib/nvme/nvme_pcie_common.o 00:02:14.306 CC lib/nvme/nvme_pcie.o 00:02:14.306 CC lib/nvme/nvme_qpair.o 00:02:14.306 CC lib/nvme/nvme_transport.o 00:02:14.306 CC lib/nvme/nvme.o 00:02:14.306 CC lib/nvme/nvme_quirks.o 00:02:14.306 CC lib/nvme/nvme_discovery.o 00:02:14.306 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:14.306 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:14.306 CC lib/nvme/nvme_tcp.o 00:02:14.306 CC lib/nvme/nvme_poll_group.o 00:02:14.306 CC lib/nvme/nvme_opal.o 00:02:14.306 CC lib/nvme/nvme_io_msg.o 00:02:14.306 CC lib/nvme/nvme_zns.o 00:02:14.306 CC lib/nvme/nvme_stubs.o 00:02:14.306 CC lib/nvme/nvme_auth.o 00:02:14.306 CC lib/nvme/nvme_vfio_user.o 00:02:14.306 CC lib/nvme/nvme_cuse.o 00:02:14.306 CC lib/nvme/nvme_rdma.o 00:02:14.306 LIB libspdk_thread.a 00:02:14.567 SO libspdk_thread.so.10.0 00:02:14.567 SYMLINK 
libspdk_thread.so 00:02:14.826 CC lib/blob/request.o 00:02:14.826 CC lib/blob/blobstore.o 00:02:14.826 CC lib/vfu_tgt/tgt_endpoint.o 00:02:14.826 CC lib/blob/zeroes.o 00:02:14.826 CC lib/blob/blob_bs_dev.o 00:02:14.826 CC lib/virtio/virtio.o 00:02:14.826 CC lib/vfu_tgt/tgt_rpc.o 00:02:14.826 CC lib/accel/accel.o 00:02:14.826 CC lib/virtio/virtio_vhost_user.o 00:02:14.826 CC lib/accel/accel_rpc.o 00:02:14.826 CC lib/init/json_config.o 00:02:14.826 CC lib/init/subsystem.o 00:02:14.826 CC lib/virtio/virtio_vfio_user.o 00:02:14.826 CC lib/accel/accel_sw.o 00:02:14.826 CC lib/init/rpc.o 00:02:14.826 CC lib/virtio/virtio_pci.o 00:02:14.826 CC lib/init/subsystem_rpc.o 00:02:15.085 LIB libspdk_init.a 00:02:15.085 LIB libspdk_virtio.a 00:02:15.085 SO libspdk_init.so.5.0 00:02:15.085 LIB libspdk_vfu_tgt.a 00:02:15.085 SO libspdk_virtio.so.7.0 00:02:15.085 SO libspdk_vfu_tgt.so.3.0 00:02:15.085 SYMLINK libspdk_init.so 00:02:15.344 SYMLINK libspdk_virtio.so 00:02:15.344 SYMLINK libspdk_vfu_tgt.so 00:02:15.605 CC lib/event/app.o 00:02:15.605 CC lib/event/reactor.o 00:02:15.605 CC lib/event/log_rpc.o 00:02:15.605 CC lib/event/app_rpc.o 00:02:15.605 CC lib/event/scheduler_static.o 00:02:15.605 LIB libspdk_accel.a 00:02:15.605 SO libspdk_accel.so.15.0 00:02:15.605 SYMLINK libspdk_accel.so 00:02:15.605 LIB libspdk_nvme.a 00:02:15.880 LIB libspdk_event.a 00:02:15.880 SO libspdk_nvme.so.13.0 00:02:15.880 SO libspdk_event.so.13.1 00:02:15.880 SYMLINK libspdk_event.so 00:02:16.138 CC lib/bdev/bdev.o 00:02:16.138 CC lib/bdev/bdev_rpc.o 00:02:16.138 CC lib/bdev/part.o 00:02:16.138 CC lib/bdev/bdev_zone.o 00:02:16.138 CC lib/bdev/scsi_nvme.o 00:02:16.138 SYMLINK libspdk_nvme.so 00:02:17.175 LIB libspdk_blob.a 00:02:17.175 SO libspdk_blob.so.11.0 00:02:17.175 SYMLINK libspdk_blob.so 00:02:17.445 CC lib/blobfs/blobfs.o 00:02:17.445 CC lib/blobfs/tree.o 00:02:17.445 CC lib/lvol/lvol.o 00:02:17.705 LIB libspdk_bdev.a 00:02:17.964 SO libspdk_bdev.so.15.0 00:02:17.964 SYMLINK libspdk_bdev.so 
00:02:17.964 LIB libspdk_blobfs.a 00:02:17.964 SO libspdk_blobfs.so.10.0 00:02:17.964 SYMLINK libspdk_blobfs.so 00:02:17.964 LIB libspdk_lvol.a 00:02:18.222 SO libspdk_lvol.so.10.0 00:02:18.223 SYMLINK libspdk_lvol.so 00:02:18.223 CC lib/scsi/dev.o 00:02:18.223 CC lib/scsi/lun.o 00:02:18.223 CC lib/scsi/port.o 00:02:18.223 CC lib/scsi/scsi.o 00:02:18.223 CC lib/scsi/scsi_bdev.o 00:02:18.223 CC lib/scsi/scsi_rpc.o 00:02:18.223 CC lib/scsi/scsi_pr.o 00:02:18.223 CC lib/scsi/task.o 00:02:18.223 CC lib/ftl/ftl_core.o 00:02:18.223 CC lib/ftl/ftl_init.o 00:02:18.223 CC lib/ftl/ftl_debug.o 00:02:18.223 CC lib/ftl/ftl_layout.o 00:02:18.223 CC lib/ftl/ftl_io.o 00:02:18.223 CC lib/ftl/ftl_sb.o 00:02:18.223 CC lib/nvmf/ctrlr_discovery.o 00:02:18.223 CC lib/ftl/ftl_l2p_flat.o 00:02:18.223 CC lib/ftl/ftl_l2p.o 00:02:18.223 CC lib/nvmf/ctrlr.o 00:02:18.223 CC lib/ftl/ftl_nv_cache.o 00:02:18.223 CC lib/ftl/ftl_band_ops.o 00:02:18.223 CC lib/nvmf/ctrlr_bdev.o 00:02:18.223 CC lib/ftl/ftl_band.o 00:02:18.223 CC lib/nvmf/subsystem.o 00:02:18.223 CC lib/ftl/ftl_writer.o 00:02:18.223 CC lib/nvmf/nvmf.o 00:02:18.223 CC lib/nbd/nbd.o 00:02:18.223 CC lib/ublk/ublk.o 00:02:18.223 CC lib/nbd/nbd_rpc.o 00:02:18.223 CC lib/nvmf/nvmf_rpc.o 00:02:18.223 CC lib/ftl/ftl_rq.o 00:02:18.223 CC lib/ublk/ublk_rpc.o 00:02:18.223 CC lib/ftl/ftl_reloc.o 00:02:18.223 CC lib/ftl/ftl_p2l.o 00:02:18.223 CC lib/nvmf/transport.o 00:02:18.223 CC lib/ftl/ftl_l2p_cache.o 00:02:18.223 CC lib/nvmf/tcp.o 00:02:18.223 CC lib/nvmf/stubs.o 00:02:18.223 CC lib/ftl/mngt/ftl_mngt.o 00:02:18.223 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:18.223 CC lib/nvmf/mdns_server.o 00:02:18.223 CC lib/nvmf/vfio_user.o 00:02:18.223 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:18.223 CC lib/nvmf/rdma.o 00:02:18.223 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:18.223 CC lib/nvmf/auth.o 00:02:18.223 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:18.223 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:18.223 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:18.223 CC 
lib/ftl/mngt/ftl_mngt_l2p.o 00:02:18.223 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:18.223 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:18.223 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:18.223 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:18.223 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:18.223 CC lib/ftl/utils/ftl_conf.o 00:02:18.223 CC lib/ftl/utils/ftl_md.o 00:02:18.223 CC lib/ftl/utils/ftl_bitmap.o 00:02:18.223 CC lib/ftl/utils/ftl_mempool.o 00:02:18.223 CC lib/ftl/utils/ftl_property.o 00:02:18.223 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:18.223 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:18.223 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:18.223 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:18.223 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:18.223 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:18.223 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:18.223 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:18.223 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:18.223 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:18.223 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:18.223 CC lib/ftl/base/ftl_base_dev.o 00:02:18.223 CC lib/ftl/base/ftl_base_bdev.o 00:02:18.223 CC lib/ftl/ftl_trace.o 00:02:18.790 LIB libspdk_nbd.a 00:02:18.790 SO libspdk_nbd.so.7.0 00:02:19.049 SYMLINK libspdk_nbd.so 00:02:19.049 LIB libspdk_scsi.a 00:02:19.049 LIB libspdk_ublk.a 00:02:19.049 SO libspdk_scsi.so.9.0 00:02:19.049 SO libspdk_ublk.so.3.0 00:02:19.049 SYMLINK libspdk_ublk.so 00:02:19.049 SYMLINK libspdk_scsi.so 00:02:19.307 LIB libspdk_ftl.a 00:02:19.307 SO libspdk_ftl.so.9.0 00:02:19.307 CC lib/vhost/vhost.o 00:02:19.307 CC lib/vhost/vhost_scsi.o 00:02:19.307 CC lib/vhost/vhost_blk.o 00:02:19.307 CC lib/vhost/vhost_rpc.o 00:02:19.307 CC lib/iscsi/conn.o 00:02:19.308 CC lib/iscsi/iscsi.o 00:02:19.308 CC lib/iscsi/init_grp.o 00:02:19.308 CC lib/vhost/rte_vhost_user.o 00:02:19.308 CC lib/iscsi/md5.o 00:02:19.308 CC lib/iscsi/param.o 00:02:19.308 CC lib/iscsi/portal_grp.o 00:02:19.308 CC lib/iscsi/tgt_node.o 00:02:19.308 CC lib/iscsi/iscsi_subsystem.o 
00:02:19.566 CC lib/iscsi/iscsi_rpc.o 00:02:19.567 CC lib/iscsi/task.o 00:02:19.567 SYMLINK libspdk_ftl.so 00:02:19.826 LIB libspdk_nvmf.a 00:02:20.085 SO libspdk_nvmf.so.18.1 00:02:20.085 SYMLINK libspdk_nvmf.so 00:02:20.085 LIB libspdk_vhost.a 00:02:20.344 SO libspdk_vhost.so.8.0 00:02:20.344 SYMLINK libspdk_vhost.so 00:02:20.344 LIB libspdk_iscsi.a 00:02:20.344 SO libspdk_iscsi.so.8.0 00:02:20.604 SYMLINK libspdk_iscsi.so 00:02:21.173 CC module/vfu_device/vfu_virtio.o 00:02:21.173 CC module/env_dpdk/env_dpdk_rpc.o 00:02:21.173 CC module/vfu_device/vfu_virtio_blk.o 00:02:21.173 CC module/vfu_device/vfu_virtio_scsi.o 00:02:21.173 CC module/vfu_device/vfu_virtio_rpc.o 00:02:21.173 CC module/accel/ioat/accel_ioat.o 00:02:21.173 CC module/accel/ioat/accel_ioat_rpc.o 00:02:21.173 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:21.173 CC module/accel/error/accel_error.o 00:02:21.173 CC module/accel/error/accel_error_rpc.o 00:02:21.173 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:21.173 LIB libspdk_env_dpdk_rpc.a 00:02:21.173 CC module/accel/iaa/accel_iaa_rpc.o 00:02:21.173 CC module/sock/posix/posix.o 00:02:21.173 CC module/accel/iaa/accel_iaa.o 00:02:21.173 CC module/scheduler/gscheduler/gscheduler.o 00:02:21.173 CC module/keyring/file/keyring.o 00:02:21.173 CC module/blob/bdev/blob_bdev.o 00:02:21.173 CC module/keyring/file/keyring_rpc.o 00:02:21.173 CC module/accel/dsa/accel_dsa_rpc.o 00:02:21.173 CC module/accel/dsa/accel_dsa.o 00:02:21.433 CC module/keyring/linux/keyring.o 00:02:21.433 CC module/keyring/linux/keyring_rpc.o 00:02:21.433 SO libspdk_env_dpdk_rpc.so.6.0 00:02:21.433 SYMLINK libspdk_env_dpdk_rpc.so 00:02:21.433 LIB libspdk_accel_ioat.a 00:02:21.433 LIB libspdk_scheduler_dpdk_governor.a 00:02:21.433 LIB libspdk_scheduler_gscheduler.a 00:02:21.433 LIB libspdk_keyring_file.a 00:02:21.433 LIB libspdk_keyring_linux.a 00:02:21.433 SO libspdk_accel_ioat.so.6.0 00:02:21.433 LIB libspdk_accel_error.a 00:02:21.433 LIB libspdk_scheduler_dynamic.a 
00:02:21.433 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:21.433 LIB libspdk_accel_iaa.a 00:02:21.433 SO libspdk_scheduler_gscheduler.so.4.0 00:02:21.433 SO libspdk_keyring_linux.so.1.0 00:02:21.433 SO libspdk_scheduler_dynamic.so.4.0 00:02:21.433 SO libspdk_keyring_file.so.1.0 00:02:21.433 SO libspdk_accel_error.so.2.0 00:02:21.433 LIB libspdk_accel_dsa.a 00:02:21.433 SO libspdk_accel_iaa.so.3.0 00:02:21.433 SYMLINK libspdk_accel_ioat.so 00:02:21.433 LIB libspdk_blob_bdev.a 00:02:21.433 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:21.433 SYMLINK libspdk_keyring_linux.so 00:02:21.433 SYMLINK libspdk_scheduler_dynamic.so 00:02:21.433 SYMLINK libspdk_scheduler_gscheduler.so 00:02:21.433 SO libspdk_accel_dsa.so.5.0 00:02:21.691 SYMLINK libspdk_accel_error.so 00:02:21.691 SYMLINK libspdk_keyring_file.so 00:02:21.691 SYMLINK libspdk_accel_iaa.so 00:02:21.691 SO libspdk_blob_bdev.so.11.0 00:02:21.691 SYMLINK libspdk_accel_dsa.so 00:02:21.691 LIB libspdk_vfu_device.a 00:02:21.691 SYMLINK libspdk_blob_bdev.so 00:02:21.691 SO libspdk_vfu_device.so.3.0 00:02:21.691 SYMLINK libspdk_vfu_device.so 00:02:21.950 LIB libspdk_sock_posix.a 00:02:21.950 SO libspdk_sock_posix.so.6.0 00:02:21.950 SYMLINK libspdk_sock_posix.so 00:02:22.209 CC module/blobfs/bdev/blobfs_bdev.o 00:02:22.209 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:22.209 CC module/bdev/aio/bdev_aio_rpc.o 00:02:22.209 CC module/bdev/aio/bdev_aio.o 00:02:22.209 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:22.209 CC module/bdev/passthru/vbdev_passthru.o 00:02:22.209 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:22.209 CC module/bdev/delay/vbdev_delay.o 00:02:22.209 CC module/bdev/error/vbdev_error.o 00:02:22.209 CC module/bdev/error/vbdev_error_rpc.o 00:02:22.209 CC module/bdev/malloc/bdev_malloc.o 00:02:22.209 CC module/bdev/lvol/vbdev_lvol.o 00:02:22.209 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:22.209 CC module/bdev/gpt/vbdev_gpt.o 00:02:22.209 CC module/bdev/gpt/gpt.o 00:02:22.209 CC 
module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:22.209 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:22.209 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:22.209 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:22.209 CC module/bdev/raid/bdev_raid.o 00:02:22.209 CC module/bdev/raid/bdev_raid_sb.o 00:02:22.209 CC module/bdev/nvme/bdev_nvme.o 00:02:22.209 CC module/bdev/raid/bdev_raid_rpc.o 00:02:22.209 CC module/bdev/raid/raid0.o 00:02:22.209 CC module/bdev/nvme/nvme_rpc.o 00:02:22.209 CC module/bdev/iscsi/bdev_iscsi.o 00:02:22.209 CC module/bdev/nvme/bdev_mdns_client.o 00:02:22.209 CC module/bdev/raid/raid1.o 00:02:22.209 CC module/bdev/raid/concat.o 00:02:22.209 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:22.209 CC module/bdev/null/bdev_null.o 00:02:22.209 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:22.209 CC module/bdev/nvme/vbdev_opal.o 00:02:22.209 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:22.209 CC module/bdev/null/bdev_null_rpc.o 00:02:22.209 CC module/bdev/ftl/bdev_ftl.o 00:02:22.209 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:22.209 CC module/bdev/split/vbdev_split.o 00:02:22.209 CC module/bdev/split/vbdev_split_rpc.o 00:02:22.209 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:22.209 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:22.209 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:22.468 LIB libspdk_blobfs_bdev.a 00:02:22.468 SO libspdk_blobfs_bdev.so.6.0 00:02:22.468 LIB libspdk_bdev_split.a 00:02:22.468 LIB libspdk_bdev_error.a 00:02:22.468 SYMLINK libspdk_blobfs_bdev.so 00:02:22.468 LIB libspdk_bdev_null.a 00:02:22.468 LIB libspdk_bdev_passthru.a 00:02:22.468 LIB libspdk_bdev_gpt.a 00:02:22.468 LIB libspdk_bdev_aio.a 00:02:22.468 SO libspdk_bdev_split.so.6.0 00:02:22.468 LIB libspdk_bdev_ftl.a 00:02:22.468 SO libspdk_bdev_error.so.6.0 00:02:22.468 SO libspdk_bdev_gpt.so.6.0 00:02:22.468 LIB libspdk_bdev_zone_block.a 00:02:22.468 LIB libspdk_bdev_delay.a 00:02:22.468 SO libspdk_bdev_null.so.6.0 00:02:22.468 SO libspdk_bdev_passthru.so.6.0 00:02:22.468 LIB 
libspdk_bdev_iscsi.a 00:02:22.468 SO libspdk_bdev_aio.so.6.0 00:02:22.468 LIB libspdk_bdev_malloc.a 00:02:22.468 SO libspdk_bdev_ftl.so.6.0 00:02:22.468 SO libspdk_bdev_zone_block.so.6.0 00:02:22.468 SYMLINK libspdk_bdev_split.so 00:02:22.468 SO libspdk_bdev_delay.so.6.0 00:02:22.468 SO libspdk_bdev_malloc.so.6.0 00:02:22.468 SYMLINK libspdk_bdev_error.so 00:02:22.468 SO libspdk_bdev_iscsi.so.6.0 00:02:22.468 SYMLINK libspdk_bdev_gpt.so 00:02:22.469 SYMLINK libspdk_bdev_passthru.so 00:02:22.469 SYMLINK libspdk_bdev_null.so 00:02:22.734 SYMLINK libspdk_bdev_aio.so 00:02:22.734 SYMLINK libspdk_bdev_ftl.so 00:02:22.734 SYMLINK libspdk_bdev_zone_block.so 00:02:22.735 LIB libspdk_bdev_lvol.a 00:02:22.735 SYMLINK libspdk_bdev_malloc.so 00:02:22.735 SYMLINK libspdk_bdev_delay.so 00:02:22.735 SYMLINK libspdk_bdev_iscsi.so 00:02:22.735 LIB libspdk_bdev_virtio.a 00:02:22.735 SO libspdk_bdev_lvol.so.6.0 00:02:22.735 SO libspdk_bdev_virtio.so.6.0 00:02:22.735 SYMLINK libspdk_bdev_lvol.so 00:02:22.735 SYMLINK libspdk_bdev_virtio.so 00:02:22.996 LIB libspdk_bdev_raid.a 00:02:22.996 SO libspdk_bdev_raid.so.6.0 00:02:22.996 SYMLINK libspdk_bdev_raid.so 00:02:23.933 LIB libspdk_bdev_nvme.a 00:02:23.933 SO libspdk_bdev_nvme.so.7.0 00:02:23.933 SYMLINK libspdk_bdev_nvme.so 00:02:24.501 CC module/event/subsystems/keyring/keyring.o 00:02:24.760 CC module/event/subsystems/vmd/vmd.o 00:02:24.760 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:24.760 CC module/event/subsystems/sock/sock.o 00:02:24.761 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:24.761 CC module/event/subsystems/iobuf/iobuf.o 00:02:24.761 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:24.761 CC module/event/subsystems/scheduler/scheduler.o 00:02:24.761 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:24.761 LIB libspdk_event_keyring.a 00:02:24.761 LIB libspdk_event_vmd.a 00:02:24.761 SO libspdk_event_keyring.so.1.0 00:02:24.761 LIB libspdk_event_sock.a 00:02:24.761 LIB libspdk_event_vfu_tgt.a 00:02:24.761 
LIB libspdk_event_vhost_blk.a 00:02:24.761 LIB libspdk_event_iobuf.a 00:02:24.761 LIB libspdk_event_scheduler.a 00:02:24.761 SO libspdk_event_vmd.so.6.0 00:02:24.761 SO libspdk_event_sock.so.5.0 00:02:24.761 SO libspdk_event_vfu_tgt.so.3.0 00:02:24.761 SO libspdk_event_vhost_blk.so.3.0 00:02:24.761 SYMLINK libspdk_event_keyring.so 00:02:24.761 SO libspdk_event_scheduler.so.4.0 00:02:24.761 SO libspdk_event_iobuf.so.3.0 00:02:25.020 SYMLINK libspdk_event_vmd.so 00:02:25.020 SYMLINK libspdk_event_sock.so 00:02:25.020 SYMLINK libspdk_event_vfu_tgt.so 00:02:25.020 SYMLINK libspdk_event_vhost_blk.so 00:02:25.020 SYMLINK libspdk_event_iobuf.so 00:02:25.020 SYMLINK libspdk_event_scheduler.so 00:02:25.278 CC module/event/subsystems/accel/accel.o 00:02:25.537 LIB libspdk_event_accel.a 00:02:25.537 SO libspdk_event_accel.so.6.0 00:02:25.537 SYMLINK libspdk_event_accel.so 00:02:25.797 CC module/event/subsystems/bdev/bdev.o 00:02:26.056 LIB libspdk_event_bdev.a 00:02:26.056 SO libspdk_event_bdev.so.6.0 00:02:26.056 SYMLINK libspdk_event_bdev.so 00:02:26.625 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:26.625 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:26.625 CC module/event/subsystems/ublk/ublk.o 00:02:26.625 CC module/event/subsystems/scsi/scsi.o 00:02:26.625 CC module/event/subsystems/nbd/nbd.o 00:02:26.625 LIB libspdk_event_ublk.a 00:02:26.625 SO libspdk_event_ublk.so.3.0 00:02:26.625 LIB libspdk_event_nbd.a 00:02:26.625 LIB libspdk_event_scsi.a 00:02:26.625 LIB libspdk_event_nvmf.a 00:02:26.625 SO libspdk_event_nbd.so.6.0 00:02:26.625 SYMLINK libspdk_event_ublk.so 00:02:26.625 SO libspdk_event_scsi.so.6.0 00:02:26.885 SO libspdk_event_nvmf.so.6.0 00:02:26.885 SYMLINK libspdk_event_nbd.so 00:02:26.885 SYMLINK libspdk_event_scsi.so 00:02:26.885 SYMLINK libspdk_event_nvmf.so 00:02:27.144 CC module/event/subsystems/iscsi/iscsi.o 00:02:27.144 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:27.402 LIB libspdk_event_iscsi.a 00:02:27.402 LIB 
libspdk_event_vhost_scsi.a 00:02:27.402 SO libspdk_event_iscsi.so.6.0 00:02:27.402 SO libspdk_event_vhost_scsi.so.3.0 00:02:27.402 SYMLINK libspdk_event_iscsi.so 00:02:27.402 SYMLINK libspdk_event_vhost_scsi.so 00:02:27.662 SO libspdk.so.6.0 00:02:27.662 SYMLINK libspdk.so 00:02:27.921 CC app/spdk_lspci/spdk_lspci.o 00:02:27.921 CXX app/trace/trace.o 00:02:27.921 CC app/spdk_nvme_perf/perf.o 00:02:27.921 CC app/trace_record/trace_record.o 00:02:27.921 CC app/spdk_nvme_discover/discovery_aer.o 00:02:27.921 CC app/spdk_nvme_identify/identify.o 00:02:27.921 CC test/rpc_client/rpc_client_test.o 00:02:27.921 CC app/spdk_top/spdk_top.o 00:02:27.921 TEST_HEADER include/spdk/accel.h 00:02:27.921 TEST_HEADER include/spdk/assert.h 00:02:27.921 TEST_HEADER include/spdk/accel_module.h 00:02:27.921 TEST_HEADER include/spdk/barrier.h 00:02:27.921 TEST_HEADER include/spdk/base64.h 00:02:27.921 TEST_HEADER include/spdk/bdev.h 00:02:27.921 TEST_HEADER include/spdk/bdev_module.h 00:02:27.921 TEST_HEADER include/spdk/bdev_zone.h 00:02:27.921 TEST_HEADER include/spdk/bit_array.h 00:02:27.921 TEST_HEADER include/spdk/bit_pool.h 00:02:27.921 TEST_HEADER include/spdk/blob_bdev.h 00:02:27.921 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:27.921 TEST_HEADER include/spdk/blobfs.h 00:02:27.921 TEST_HEADER include/spdk/blob.h 00:02:27.921 TEST_HEADER include/spdk/conf.h 00:02:27.921 TEST_HEADER include/spdk/config.h 00:02:27.921 TEST_HEADER include/spdk/cpuset.h 00:02:27.921 TEST_HEADER include/spdk/crc16.h 00:02:27.921 TEST_HEADER include/spdk/crc32.h 00:02:27.921 TEST_HEADER include/spdk/crc64.h 00:02:27.921 TEST_HEADER include/spdk/dif.h 00:02:27.921 TEST_HEADER include/spdk/dma.h 00:02:27.921 TEST_HEADER include/spdk/endian.h 00:02:27.921 TEST_HEADER include/spdk/env_dpdk.h 00:02:27.921 TEST_HEADER include/spdk/env.h 00:02:27.921 TEST_HEADER include/spdk/event.h 00:02:27.921 TEST_HEADER include/spdk/fd_group.h 00:02:27.921 TEST_HEADER include/spdk/fd.h 00:02:27.921 TEST_HEADER 
include/spdk/file.h 00:02:27.921 TEST_HEADER include/spdk/ftl.h 00:02:27.921 TEST_HEADER include/spdk/gpt_spec.h 00:02:27.921 TEST_HEADER include/spdk/hexlify.h 00:02:27.921 TEST_HEADER include/spdk/histogram_data.h 00:02:27.921 TEST_HEADER include/spdk/idxd.h 00:02:28.184 TEST_HEADER include/spdk/idxd_spec.h 00:02:28.184 TEST_HEADER include/spdk/init.h 00:02:28.184 TEST_HEADER include/spdk/ioat.h 00:02:28.184 TEST_HEADER include/spdk/ioat_spec.h 00:02:28.184 CC app/spdk_dd/spdk_dd.o 00:02:28.184 TEST_HEADER include/spdk/iscsi_spec.h 00:02:28.184 TEST_HEADER include/spdk/json.h 00:02:28.184 TEST_HEADER include/spdk/jsonrpc.h 00:02:28.184 TEST_HEADER include/spdk/keyring.h 00:02:28.184 CC app/vhost/vhost.o 00:02:28.184 TEST_HEADER include/spdk/keyring_module.h 00:02:28.184 TEST_HEADER include/spdk/log.h 00:02:28.184 TEST_HEADER include/spdk/likely.h 00:02:28.184 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:28.184 TEST_HEADER include/spdk/memory.h 00:02:28.184 TEST_HEADER include/spdk/lvol.h 00:02:28.184 TEST_HEADER include/spdk/mmio.h 00:02:28.184 CC app/iscsi_tgt/iscsi_tgt.o 00:02:28.184 TEST_HEADER include/spdk/nbd.h 00:02:28.184 CC app/nvmf_tgt/nvmf_main.o 00:02:28.184 TEST_HEADER include/spdk/notify.h 00:02:28.184 TEST_HEADER include/spdk/nvme.h 00:02:28.184 TEST_HEADER include/spdk/nvme_intel.h 00:02:28.184 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:28.184 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:28.184 TEST_HEADER include/spdk/nvme_zns.h 00:02:28.184 TEST_HEADER include/spdk/nvme_spec.h 00:02:28.184 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:28.184 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:28.184 TEST_HEADER include/spdk/nvmf.h 00:02:28.184 TEST_HEADER include/spdk/nvmf_spec.h 00:02:28.184 TEST_HEADER include/spdk/nvmf_transport.h 00:02:28.184 TEST_HEADER include/spdk/opal.h 00:02:28.184 TEST_HEADER include/spdk/opal_spec.h 00:02:28.184 TEST_HEADER include/spdk/pci_ids.h 00:02:28.184 TEST_HEADER include/spdk/queue.h 00:02:28.184 TEST_HEADER 
include/spdk/pipe.h 00:02:28.184 TEST_HEADER include/spdk/reduce.h 00:02:28.184 TEST_HEADER include/spdk/rpc.h 00:02:28.184 TEST_HEADER include/spdk/scsi.h 00:02:28.184 TEST_HEADER include/spdk/scheduler.h 00:02:28.184 TEST_HEADER include/spdk/scsi_spec.h 00:02:28.184 TEST_HEADER include/spdk/sock.h 00:02:28.184 TEST_HEADER include/spdk/stdinc.h 00:02:28.184 TEST_HEADER include/spdk/string.h 00:02:28.184 TEST_HEADER include/spdk/trace.h 00:02:28.184 CC app/spdk_tgt/spdk_tgt.o 00:02:28.184 TEST_HEADER include/spdk/thread.h 00:02:28.184 TEST_HEADER include/spdk/trace_parser.h 00:02:28.184 TEST_HEADER include/spdk/tree.h 00:02:28.184 TEST_HEADER include/spdk/ublk.h 00:02:28.184 TEST_HEADER include/spdk/util.h 00:02:28.184 TEST_HEADER include/spdk/uuid.h 00:02:28.184 TEST_HEADER include/spdk/version.h 00:02:28.184 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:28.184 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:28.184 TEST_HEADER include/spdk/vhost.h 00:02:28.184 TEST_HEADER include/spdk/vmd.h 00:02:28.184 TEST_HEADER include/spdk/xor.h 00:02:28.184 TEST_HEADER include/spdk/zipf.h 00:02:28.184 CXX test/cpp_headers/accel.o 00:02:28.184 CXX test/cpp_headers/accel_module.o 00:02:28.184 CXX test/cpp_headers/assert.o 00:02:28.184 CXX test/cpp_headers/barrier.o 00:02:28.184 CXX test/cpp_headers/base64.o 00:02:28.184 CXX test/cpp_headers/bdev_module.o 00:02:28.184 CXX test/cpp_headers/bdev.o 00:02:28.184 CXX test/cpp_headers/bdev_zone.o 00:02:28.184 CXX test/cpp_headers/bit_pool.o 00:02:28.184 CXX test/cpp_headers/bit_array.o 00:02:28.184 CXX test/cpp_headers/blob_bdev.o 00:02:28.184 CXX test/cpp_headers/blobfs_bdev.o 00:02:28.184 CXX test/cpp_headers/blobfs.o 00:02:28.184 CXX test/cpp_headers/blob.o 00:02:28.184 CXX test/cpp_headers/conf.o 00:02:28.184 CXX test/cpp_headers/cpuset.o 00:02:28.184 CXX test/cpp_headers/config.o 00:02:28.184 CXX test/cpp_headers/crc16.o 00:02:28.184 CXX test/cpp_headers/crc32.o 00:02:28.184 CXX test/cpp_headers/crc64.o 00:02:28.184 CXX 
test/cpp_headers/dif.o 00:02:28.184 CXX test/cpp_headers/dma.o 00:02:28.184 CXX test/cpp_headers/endian.o 00:02:28.184 CXX test/cpp_headers/env_dpdk.o 00:02:28.184 CXX test/cpp_headers/env.o 00:02:28.184 CXX test/cpp_headers/event.o 00:02:28.184 CXX test/cpp_headers/fd_group.o 00:02:28.184 CXX test/cpp_headers/fd.o 00:02:28.184 CXX test/cpp_headers/file.o 00:02:28.184 CXX test/cpp_headers/ftl.o 00:02:28.184 CXX test/cpp_headers/gpt_spec.o 00:02:28.184 CXX test/cpp_headers/hexlify.o 00:02:28.184 CXX test/cpp_headers/histogram_data.o 00:02:28.184 CXX test/cpp_headers/idxd.o 00:02:28.184 CXX test/cpp_headers/idxd_spec.o 00:02:28.184 CXX test/cpp_headers/init.o 00:02:28.184 CXX test/cpp_headers/ioat.o 00:02:28.184 CC examples/nvme/reconnect/reconnect.o 00:02:28.184 CC examples/ioat/perf/perf.o 00:02:28.184 CC examples/nvme/hello_world/hello_world.o 00:02:28.185 CC examples/vmd/led/led.o 00:02:28.185 CC examples/nvme/hotplug/hotplug.o 00:02:28.185 CXX test/cpp_headers/ioat_spec.o 00:02:28.185 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:28.185 CC examples/vmd/lsvmd/lsvmd.o 00:02:28.185 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:28.185 CC examples/ioat/verify/verify.o 00:02:28.185 CC examples/nvme/abort/abort.o 00:02:28.185 CC examples/nvme/arbitration/arbitration.o 00:02:28.185 CC examples/accel/perf/accel_perf.o 00:02:28.185 CC examples/sock/hello_world/hello_sock.o 00:02:28.185 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:28.185 CC test/env/memory/memory_ut.o 00:02:28.185 CC test/env/vtophys/vtophys.o 00:02:28.185 CC examples/util/zipf/zipf.o 00:02:28.185 CC app/fio/nvme/fio_plugin.o 00:02:28.185 CC test/nvme/connect_stress/connect_stress.o 00:02:28.185 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:28.185 CC examples/nvmf/nvmf/nvmf.o 00:02:28.185 CC test/nvme/sgl/sgl.o 00:02:28.185 CC test/nvme/startup/startup.o 00:02:28.185 CC test/nvme/aer/aer.o 00:02:28.185 CC test/nvme/fdp/fdp.o 00:02:28.185 CC test/nvme/overhead/overhead.o 00:02:28.455 
CC test/nvme/e2edp/nvme_dp.o 00:02:28.455 CC test/env/pci/pci_ut.o 00:02:28.455 CC test/event/reactor/reactor.o 00:02:28.455 CC test/app/jsoncat/jsoncat.o 00:02:28.455 CC test/app/stub/stub.o 00:02:28.455 CC test/nvme/reset/reset.o 00:02:28.455 CC test/thread/poller_perf/poller_perf.o 00:02:28.455 CC test/nvme/err_injection/err_injection.o 00:02:28.455 CC test/event/reactor_perf/reactor_perf.o 00:02:28.455 CC test/nvme/fused_ordering/fused_ordering.o 00:02:28.455 CC test/event/event_perf/event_perf.o 00:02:28.455 CC test/app/histogram_perf/histogram_perf.o 00:02:28.455 CC test/nvme/reserve/reserve.o 00:02:28.455 CC examples/idxd/perf/perf.o 00:02:28.455 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:28.455 CC test/nvme/cuse/cuse.o 00:02:28.455 CC test/nvme/boot_partition/boot_partition.o 00:02:28.455 CC test/nvme/compliance/nvme_compliance.o 00:02:28.455 CC test/nvme/simple_copy/simple_copy.o 00:02:28.455 CC test/event/app_repeat/app_repeat.o 00:02:28.455 CC test/bdev/bdevio/bdevio.o 00:02:28.455 CC test/dma/test_dma/test_dma.o 00:02:28.455 CC app/fio/bdev/fio_plugin.o 00:02:28.455 CC examples/thread/thread/thread_ex.o 00:02:28.455 CC test/blobfs/mkfs/mkfs.o 00:02:28.455 CC test/accel/dif/dif.o 00:02:28.455 CC examples/bdev/bdevperf/bdevperf.o 00:02:28.455 CC examples/blob/cli/blobcli.o 00:02:28.455 CC test/event/scheduler/scheduler.o 00:02:28.455 CC examples/bdev/hello_world/hello_bdev.o 00:02:28.455 CC examples/blob/hello_world/hello_blob.o 00:02:28.455 CC test/app/bdev_svc/bdev_svc.o 00:02:28.455 LINK spdk_lspci 00:02:28.455 LINK rpc_client_test 00:02:28.455 LINK spdk_nvme_discover 00:02:28.722 LINK interrupt_tgt 00:02:28.722 LINK vhost 00:02:28.722 CC test/env/mem_callbacks/mem_callbacks.o 00:02:28.722 LINK iscsi_tgt 00:02:28.722 LINK lsvmd 00:02:28.722 LINK zipf 00:02:28.722 LINK led 00:02:28.722 LINK nvmf_tgt 00:02:28.722 LINK pmr_persistence 00:02:28.722 LINK reactor 00:02:28.722 LINK startup 00:02:28.722 CC test/lvol/esnap/esnap.o 00:02:28.722 CXX 
test/cpp_headers/iscsi_spec.o 00:02:28.722 LINK cmb_copy 00:02:28.722 CXX test/cpp_headers/json.o 00:02:28.722 LINK stub 00:02:28.998 CXX test/cpp_headers/jsonrpc.o 00:02:28.998 CXX test/cpp_headers/keyring.o 00:02:28.998 LINK app_repeat 00:02:28.998 CXX test/cpp_headers/keyring_module.o 00:02:28.998 CXX test/cpp_headers/likely.o 00:02:28.998 LINK spdk_tgt 00:02:28.998 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:28.998 CXX test/cpp_headers/log.o 00:02:28.998 CXX test/cpp_headers/lvol.o 00:02:28.998 LINK spdk_trace_record 00:02:28.998 CXX test/cpp_headers/memory.o 00:02:28.998 LINK histogram_perf 00:02:28.998 CXX test/cpp_headers/mmio.o 00:02:28.998 LINK ioat_perf 00:02:28.998 LINK vtophys 00:02:28.998 CXX test/cpp_headers/nbd.o 00:02:28.998 CXX test/cpp_headers/notify.o 00:02:28.998 CXX test/cpp_headers/nvme.o 00:02:28.998 LINK verify 00:02:28.998 LINK hotplug 00:02:28.998 LINK reactor_perf 00:02:28.998 CXX test/cpp_headers/nvme_intel.o 00:02:28.998 LINK fused_ordering 00:02:28.998 LINK jsoncat 00:02:28.998 LINK event_perf 00:02:28.998 LINK env_dpdk_post_init 00:02:28.998 LINK hello_sock 00:02:28.998 LINK poller_perf 00:02:28.998 CXX test/cpp_headers/nvme_ocssd.o 00:02:28.998 LINK connect_stress 00:02:28.998 LINK mkfs 00:02:28.999 CXX test/cpp_headers/nvme_spec.o 00:02:28.999 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:28.999 CXX test/cpp_headers/nvme_zns.o 00:02:28.999 CXX test/cpp_headers/nvmf_cmd.o 00:02:28.999 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:28.999 CXX test/cpp_headers/nvmf.o 00:02:28.999 CXX test/cpp_headers/nvmf_spec.o 00:02:28.999 CXX test/cpp_headers/opal.o 00:02:28.999 CXX test/cpp_headers/nvmf_transport.o 00:02:28.999 LINK err_injection 00:02:28.999 CXX test/cpp_headers/opal_spec.o 00:02:28.999 LINK boot_partition 00:02:28.999 CXX test/cpp_headers/pci_ids.o 00:02:28.999 CXX test/cpp_headers/pipe.o 00:02:28.999 CXX test/cpp_headers/queue.o 00:02:28.999 CXX test/cpp_headers/reduce.o 00:02:28.999 CXX test/cpp_headers/rpc.o 00:02:28.999 CXX 
test/cpp_headers/scheduler.o 00:02:28.999 LINK reset 00:02:28.999 LINK aer 00:02:28.999 LINK overhead 00:02:28.999 CXX test/cpp_headers/scsi_spec.o 00:02:28.999 CXX test/cpp_headers/scsi.o 00:02:28.999 LINK scheduler 00:02:28.999 CXX test/cpp_headers/sock.o 00:02:28.999 LINK doorbell_aers 00:02:28.999 CXX test/cpp_headers/string.o 00:02:28.999 LINK reserve 00:02:28.999 LINK hello_blob 00:02:28.999 CXX test/cpp_headers/stdinc.o 00:02:28.999 CXX test/cpp_headers/thread.o 00:02:28.999 CXX test/cpp_headers/trace.o 00:02:28.999 LINK hello_world 00:02:28.999 LINK nvmf 00:02:28.999 LINK reconnect 00:02:28.999 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:28.999 LINK thread 00:02:28.999 LINK nvme_compliance 00:02:28.999 CXX test/cpp_headers/trace_parser.o 00:02:28.999 LINK idxd_perf 00:02:28.999 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:28.999 LINK fdp 00:02:28.999 LINK bdev_svc 00:02:28.999 LINK simple_copy 00:02:28.999 CXX test/cpp_headers/tree.o 00:02:29.259 LINK nvme_dp 00:02:29.259 LINK sgl 00:02:29.259 LINK hello_bdev 00:02:29.259 CXX test/cpp_headers/ublk.o 00:02:29.259 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:29.259 CXX test/cpp_headers/util.o 00:02:29.259 LINK spdk_dd 00:02:29.259 CXX test/cpp_headers/uuid.o 00:02:29.259 LINK arbitration 00:02:29.259 CXX test/cpp_headers/version.o 00:02:29.259 LINK test_dma 00:02:29.259 LINK bdevio 00:02:29.259 CXX test/cpp_headers/vfio_user_pci.o 00:02:29.259 CXX test/cpp_headers/vfio_user_spec.o 00:02:29.259 CXX test/cpp_headers/vhost.o 00:02:29.259 CXX test/cpp_headers/vmd.o 00:02:29.259 CXX test/cpp_headers/xor.o 00:02:29.259 CXX test/cpp_headers/zipf.o 00:02:29.259 LINK accel_perf 00:02:29.259 LINK spdk_trace 00:02:29.259 LINK abort 00:02:29.259 LINK pci_ut 00:02:29.518 LINK dif 00:02:29.518 LINK spdk_nvme 00:02:29.518 LINK spdk_bdev 00:02:29.518 LINK nvme_manage 00:02:29.518 LINK blobcli 00:02:29.518 LINK nvme_fuzz 00:02:29.777 LINK spdk_nvme_perf 00:02:29.777 LINK mem_callbacks 00:02:29.777 LINK vhost_fuzz 
00:02:29.777 LINK bdevperf 00:02:29.777 LINK spdk_nvme_identify 00:02:29.777 LINK spdk_top 00:02:30.036 LINK memory_ut 00:02:30.036 LINK cuse 00:02:30.604 LINK iscsi_fuzz 00:02:32.509 LINK esnap 00:02:32.768 00:02:32.768 real 0m48.775s 00:02:32.768 user 6m38.062s 00:02:32.768 sys 4m26.124s 00:02:32.768 11:51:22 make -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:02:32.768 11:51:22 make -- common/autotest_common.sh@10 -- $ set +x 00:02:32.768 ************************************ 00:02:32.768 END TEST make 00:02:32.768 ************************************ 00:02:33.027 11:51:22 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:33.027 11:51:22 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:33.027 11:51:22 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:33.027 11:51:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.027 11:51:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:33.027 11:51:22 -- pm/common@44 -- $ pid=1911925 00:02:33.027 11:51:22 -- pm/common@50 -- $ kill -TERM 1911925 00:02:33.027 11:51:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.027 11:51:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:33.027 11:51:22 -- pm/common@44 -- $ pid=1911927 00:02:33.027 11:51:22 -- pm/common@50 -- $ kill -TERM 1911927 00:02:33.027 11:51:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.027 11:51:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:33.027 11:51:22 -- pm/common@44 -- $ pid=1911929 00:02:33.027 11:51:22 -- pm/common@50 -- $ kill -TERM 1911929 00:02:33.027 11:51:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.027 11:51:22 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:33.027 11:51:22 -- pm/common@44 -- $ pid=1911952 00:02:33.027 11:51:22 -- pm/common@50 -- $ sudo -E kill -TERM 1911952 00:02:33.027 11:51:22 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:33.027 11:51:22 -- nvmf/common.sh@7 -- # uname -s 00:02:33.027 11:51:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:33.027 11:51:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:33.027 11:51:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:33.027 11:51:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:33.027 11:51:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:33.027 11:51:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:33.027 11:51:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:33.027 11:51:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:33.027 11:51:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:33.027 11:51:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:33.027 11:51:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:02:33.027 11:51:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:02:33.027 11:51:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:33.027 11:51:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:33.027 11:51:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:33.027 11:51:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:33.027 11:51:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:33.027 11:51:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:33.027 11:51:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:33.028 11:51:22 -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:33.028 11:51:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.028 11:51:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.028 11:51:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.028 11:51:22 -- paths/export.sh@5 -- # export PATH 00:02:33.028 11:51:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.028 11:51:22 -- nvmf/common.sh@47 -- # : 0 00:02:33.028 11:51:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:33.028 11:51:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:33.028 11:51:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:33.028 11:51:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:33.028 11:51:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:33.028 11:51:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:33.028 11:51:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:33.028 11:51:22 -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:02:33.028 11:51:22 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:33.028 11:51:22 -- spdk/autotest.sh@32 -- # uname -s 00:02:33.028 11:51:22 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:33.028 11:51:22 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:33.028 11:51:22 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:33.028 11:51:22 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:33.028 11:51:22 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:33.028 11:51:22 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:33.028 11:51:22 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:33.028 11:51:22 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:33.028 11:51:22 -- spdk/autotest.sh@48 -- # udevadm_pid=1973054 00:02:33.028 11:51:22 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:33.028 11:51:22 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:33.028 11:51:22 -- pm/common@17 -- # local monitor 00:02:33.028 11:51:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.028 11:51:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.028 11:51:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.028 11:51:22 -- pm/common@21 -- # date +%s 00:02:33.028 11:51:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.028 11:51:22 -- pm/common@21 -- # date +%s 00:02:33.028 11:51:22 -- pm/common@21 -- # date +%s 00:02:33.028 11:51:22 -- pm/common@25 -- # sleep 1 00:02:33.028 11:51:22 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p 
monitor.autotest.sh.1718013082 00:02:33.028 11:51:22 -- pm/common@21 -- # date +%s 00:02:33.028 11:51:22 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718013082 00:02:33.028 11:51:22 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718013082 00:02:33.028 11:51:22 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718013082 00:02:33.287 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718013082_collect-vmstat.pm.log 00:02:33.287 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718013082_collect-cpu-load.pm.log 00:02:33.287 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718013082_collect-cpu-temp.pm.log 00:02:33.287 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718013082_collect-bmc-pm.bmc.pm.log 00:02:34.224 11:51:23 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:34.224 11:51:23 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:34.224 11:51:23 -- common/autotest_common.sh@723 -- # xtrace_disable 00:02:34.224 11:51:23 -- common/autotest_common.sh@10 -- # set +x 00:02:34.224 11:51:23 -- spdk/autotest.sh@59 -- # create_test_list 00:02:34.224 11:51:23 -- common/autotest_common.sh@747 -- # xtrace_disable 00:02:34.224 11:51:23 -- common/autotest_common.sh@10 -- # set +x 00:02:34.224 11:51:23 -- spdk/autotest.sh@61 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:34.224 11:51:23 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:34.224 11:51:23 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:34.224 11:51:23 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:34.224 11:51:23 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:34.224 11:51:23 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:34.224 11:51:23 -- common/autotest_common.sh@1454 -- # uname 00:02:34.224 11:51:23 -- common/autotest_common.sh@1454 -- # '[' Linux = FreeBSD ']' 00:02:34.224 11:51:23 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:34.224 11:51:23 -- common/autotest_common.sh@1474 -- # uname 00:02:34.224 11:51:23 -- common/autotest_common.sh@1474 -- # [[ Linux = FreeBSD ]] 00:02:34.224 11:51:23 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:34.224 11:51:23 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:34.224 11:51:23 -- spdk/autotest.sh@72 -- # hash lcov 00:02:34.224 11:51:23 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:34.224 11:51:23 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:34.224 --rc lcov_branch_coverage=1 00:02:34.224 --rc lcov_function_coverage=1 00:02:34.224 --rc genhtml_branch_coverage=1 00:02:34.224 --rc genhtml_function_coverage=1 00:02:34.224 --rc genhtml_legend=1 00:02:34.224 --rc geninfo_all_blocks=1 00:02:34.224 ' 00:02:34.224 11:51:23 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:34.224 --rc lcov_branch_coverage=1 00:02:34.224 --rc lcov_function_coverage=1 00:02:34.224 --rc genhtml_branch_coverage=1 00:02:34.224 --rc genhtml_function_coverage=1 00:02:34.224 --rc genhtml_legend=1 00:02:34.224 --rc geninfo_all_blocks=1 00:02:34.224 ' 00:02:34.224 11:51:23 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:34.224 --rc 
lcov_branch_coverage=1 00:02:34.224 --rc lcov_function_coverage=1 00:02:34.224 --rc genhtml_branch_coverage=1 00:02:34.224 --rc genhtml_function_coverage=1 00:02:34.224 --rc genhtml_legend=1 00:02:34.224 --rc geninfo_all_blocks=1 00:02:34.224 --no-external' 00:02:34.224 11:51:23 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:34.224 --rc lcov_branch_coverage=1 00:02:34.224 --rc lcov_function_coverage=1 00:02:34.224 --rc genhtml_branch_coverage=1 00:02:34.224 --rc genhtml_function_coverage=1 00:02:34.224 --rc genhtml_legend=1 00:02:34.224 --rc geninfo_all_blocks=1 00:02:34.224 --no-external' 00:02:34.224 11:51:23 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:34.224 lcov: LCOV version 1.14 00:02:34.224 11:51:23 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:44.205 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:44.205 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:59.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:59.152 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:59.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:59.152 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:59.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:59.152 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:59.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:59.152 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:59.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:59.152 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:59.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:59.152 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:59.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:59.152 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:59.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:59.152 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:59.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:59.152 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:59.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions 
found 00:02:59.152 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:59.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:59.152 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:59.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:59.152 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:59.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:59.152 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:59.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:59.152 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:59.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:59.152 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:59.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:59.152 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:59.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:59.152 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:59.152 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:59.152 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:59.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:59.152 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:59.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:59.152 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:59.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:59.153 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:59.153 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions 
found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:59.153 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:59.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:59.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:59.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:59.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:59.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:59.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:59.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:59.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:59.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:59.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:59.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:59.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:59.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:59.154 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:59.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:59.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:59.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:59.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:59.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:59.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:59.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:59.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:59.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:59.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:59.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:59.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:59.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:59.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:59.154 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:59.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:59.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:59.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:59.154 11:51:48 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:59.154 11:51:48 -- common/autotest_common.sh@723 -- # xtrace_disable 00:02:59.154 11:51:48 -- common/autotest_common.sh@10 -- # set +x 00:02:59.154 11:51:48 -- spdk/autotest.sh@91 -- # rm -f 00:02:59.154 11:51:48 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:02.447 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:02.447 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:02.447 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:02.447 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:02.447 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:02.447 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:02.447 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:02.447 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:02.447 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:02.447 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:02.447 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:02.447 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:02.447 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:02.447 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:02.447 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:02.713 0000:80:04.0 (8086 
2021): Already using the ioatdma driver 00:03:02.714 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:03:02.714 11:51:52 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:02.714 11:51:52 -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:03:02.714 11:51:52 -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:03:02.714 11:51:52 -- common/autotest_common.sh@1669 -- # local nvme bdf 00:03:02.714 11:51:52 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:03:02.714 11:51:52 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:03:02.714 11:51:52 -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:03:02.714 11:51:52 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:02.714 11:51:52 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:03:02.714 11:51:52 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:02.714 11:51:52 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:02.714 11:51:52 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:02.714 11:51:52 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:02.714 11:51:52 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:02.714 11:51:52 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:02.714 No valid GPT data, bailing 00:03:02.714 11:51:52 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:02.714 11:51:52 -- scripts/common.sh@391 -- # pt= 00:03:02.714 11:51:52 -- scripts/common.sh@392 -- # return 1 00:03:02.714 11:51:52 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:02.714 1+0 records in 00:03:02.714 1+0 records out 00:03:02.714 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00177982 s, 589 MB/s 00:03:02.714 11:51:52 -- spdk/autotest.sh@118 -- # sync 00:03:02.714 11:51:52 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:02.714 11:51:52 -- 
common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:02.714 11:51:52 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:10.837 11:51:58 -- spdk/autotest.sh@124 -- # uname -s 00:03:10.837 11:51:58 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:10.837 11:51:58 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:10.837 11:51:58 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:10.837 11:51:58 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:10.837 11:51:58 -- common/autotest_common.sh@10 -- # set +x 00:03:10.837 ************************************ 00:03:10.837 START TEST setup.sh 00:03:10.837 ************************************ 00:03:10.837 11:51:58 setup.sh -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:10.837 * Looking for test storage... 00:03:10.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:10.837 11:51:58 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:10.837 11:51:59 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:10.837 11:51:59 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:10.837 11:51:59 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:10.837 11:51:59 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:10.837 11:51:59 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:10.837 ************************************ 00:03:10.837 START TEST acl 00:03:10.837 ************************************ 00:03:10.837 11:51:59 setup.sh.acl -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:10.837 * Looking for test storage... 
00:03:10.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:10.837 11:51:59 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:10.837 11:51:59 setup.sh.acl -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:03:10.837 11:51:59 setup.sh.acl -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:03:10.837 11:51:59 setup.sh.acl -- common/autotest_common.sh@1669 -- # local nvme bdf 00:03:10.837 11:51:59 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:03:10.837 11:51:59 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:03:10.837 11:51:59 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:03:10.837 11:51:59 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:10.837 11:51:59 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:03:10.837 11:51:59 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:10.837 11:51:59 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:10.837 11:51:59 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:10.837 11:51:59 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:10.837 11:51:59 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:10.837 11:51:59 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:10.837 11:51:59 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:13.375 11:52:02 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:13.375 11:52:02 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:13.375 11:52:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.375 11:52:02 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:13.375 11:52:02 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:13.375 11:52:02 setup.sh.acl -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:16.666 Hugepages 00:03:16.666 node hugesize free / total 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.666 00:03:16.666 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.666 11:52:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.666 11:52:06 setup.sh.acl -- 
setup/acl.sh@20 -- # continue 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.666 
11:52:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:16.666 11:52:06 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:16.666 11:52:06 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:16.666 11:52:06 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:16.666 11:52:06 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:16.666 ************************************ 00:03:16.666 START TEST denied 00:03:16.666 ************************************ 00:03:16.666 11:52:06 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # denied 00:03:16.666 11:52:06 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:03:16.666 11:52:06 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:03:16.666 11:52:06 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:16.666 11:52:06 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:16.666 11:52:06 setup.sh.acl.denied -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:20.860 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:20.860 11:52:09 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:20.860 11:52:09 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:20.860 11:52:09 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:20.860 11:52:09 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:20.860 11:52:09 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:20.860 11:52:09 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:20.860 11:52:09 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:20.860 11:52:09 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:20.860 11:52:09 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:20.860 11:52:09 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:25.067 00:03:25.067 real 0m7.876s 00:03:25.067 user 0m2.434s 00:03:25.067 sys 0m4.763s 00:03:25.067 11:52:14 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:25.067 11:52:14 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:25.067 ************************************ 00:03:25.067 END TEST denied 00:03:25.067 ************************************ 00:03:25.067 11:52:14 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:25.067 11:52:14 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:25.067 11:52:14 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:25.067 11:52:14 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:25.067 ************************************ 00:03:25.067 START TEST allowed 00:03:25.067 
************************************ 00:03:25.067 11:52:14 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # allowed 00:03:25.067 11:52:14 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:25.067 11:52:14 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:25.067 11:52:14 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:25.067 11:52:14 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.067 11:52:14 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:30.342 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:30.342 11:52:19 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:30.342 11:52:19 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:30.342 11:52:19 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:30.342 11:52:19 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:30.342 11:52:19 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:33.632 00:03:33.632 real 0m8.732s 00:03:33.632 user 0m2.523s 00:03:33.632 sys 0m4.788s 00:03:33.632 11:52:22 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:33.632 11:52:22 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:33.632 ************************************ 00:03:33.632 END TEST allowed 00:03:33.632 ************************************ 00:03:33.632 00:03:33.632 real 0m23.872s 00:03:33.632 user 0m7.543s 00:03:33.632 sys 0m14.470s 00:03:33.632 11:52:22 setup.sh.acl -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:33.632 11:52:22 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:33.632 ************************************ 00:03:33.632 END TEST acl 00:03:33.632 ************************************ 00:03:33.632 11:52:22 
setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:33.632 11:52:22 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:33.632 11:52:22 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:33.632 11:52:22 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:33.632 ************************************ 00:03:33.633 START TEST hugepages 00:03:33.633 ************************************ 00:03:33.633 11:52:22 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:33.633 * Looking for test storage... 00:03:33.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:33.633 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:33.633 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:33.633 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:33.633 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:33.633 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:33.633 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:33.633 11:52:23 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:33.633 11:52:23 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:33.633 11:52:23 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:33.633 11:52:23 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:33.633 11:52:23 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.633 11:52:23 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.633 11:52:23 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 
00:03:33.633 11:52:23 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.633 11:52:23 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.633 11:52:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:33.633 11:52:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:33.633 11:52:23 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42882692 kB' 'MemAvailable: 46392856 kB' 'Buffers: 2704 kB' 'Cached: 9500212 kB' 'SwapCached: 0 kB' 'Active: 6485244 kB' 'Inactive: 3505224 kB' 'Active(anon): 6096360 kB' 'Inactive(anon): 0 kB' 'Active(file): 388884 kB' 'Inactive(file): 3505224 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491116 kB' 'Mapped: 211472 kB' 'Shmem: 5608808 kB' 'KReclaimable: 226388 kB' 'Slab: 781440 kB' 'SReclaimable: 226388 kB' 'SUnreclaim: 555052 kB' 'KernelStack: 22080 kB' 'PageTables: 8620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439060 kB' 'Committed_AS: 7453004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215856 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2594164 kB' 'DirectMap2M: 12820480 kB' 'DirectMap1G: 53477376 kB'
00:03:33.633 11:52:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:33.633 11:52:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue
[... repetitive xtrace elided: the IFS=': ' / read / compare / continue cycle repeats for every remaining /proc/meminfo key from MemFree through HugePages_Surp; none matches Hugepagesize ...]
00:03:33.634 11:52:23 setup.sh.hugepages --
setup/common.sh@31 -- # IFS=': '
00:03:33.634 11:52:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:33.634 11:52:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:33.634 11:52:23 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:03:33.634 11:52:23 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:03:33.634 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:33.634 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:33.634 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:33.634 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:03:33.634 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:03:33.634 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:03:33.634 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:03:33.634 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:03:33.634 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:03:33.634 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:33.634 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:03:33.634 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:33.634 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:33.634 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:33.634 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:33.634 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:03:33.634 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:33.634 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:33.634 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:33.634 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:33.634 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:33.634 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:33.634 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:33.634 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:33.635 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:33.635 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:33.635 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:33.635 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:33.635 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:33.635 11:52:23 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:33.635 11:52:23 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:03:33.635 11:52:23 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:03:33.635 11:52:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:33.894 ************************************
00:03:33.894 START TEST default_setup
00:03:33.894 ************************************
00:03:33.894 11:52:23 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # default_setup
00:03:33.894 11:52:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:33.894 11:52:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:03:33.894 11:52:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:33.894 11:52:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:03:33.894 11:52:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:33.894 11:52:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:03:33.894 11:52:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:33.894 11:52:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:33.894 11:52:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:33.894 11:52:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:33.894 11:52:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:03:33.894 11:52:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:33.894 11:52:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:33.894 11:52:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:33.894 11:52:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:33.894 11:52:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:33.894 11:52:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:33.894 11:52:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:33.894 11:52:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:03:33.894 11:52:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:33.894 11:52:23 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:33.894 11:52:23 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:37.276 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:37.276 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:37.276 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:37.276 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:37.276 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:37.276 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:37.276 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:37.276 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:37.276 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:37.276 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:37.276 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:37.276 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:37.276 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:37.276 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:37.276 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:37.276 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:39.188 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
00:03:39.188 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:39.188 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:03:39.188 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:03:39.188 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:03:39.188 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:03:39.188 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:39.188 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:03:39.188 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:39.188 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:39.188 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:39.188 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:39.188 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:39.188 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:39.188 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:39.188 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:39.188 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:39.188 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:39.188 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:39.188 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 45091120 kB' 'MemAvailable: 48601240 kB' 'Buffers: 2704 kB' 'Cached: 9500344 kB' 'SwapCached: 0 kB' 'Active: 6500492 kB' 'Inactive: 3505224 kB' 'Active(anon): 6111608 kB' 'Inactive(anon): 0 kB' 'Active(file): 388884 kB' 'Inactive(file): 3505224 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506112 kB' 'Mapped: 211692 kB' 'Shmem: 5608940 kB' 'KReclaimable: 226300 kB' 'Slab: 779848 kB' 'SReclaimable: 226300 kB' 'SUnreclaim: 553548 kB' 'KernelStack: 22208 kB' 'PageTables: 8752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 7466756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216128 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2594164 kB' 'DirectMap2M: 12820480 kB' 'DirectMap1G: 53477376 kB'
00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:39.189 11:52:28
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... repetitive xtrace elided: the IFS=': ' / read / compare / continue cycle repeats for the keys Cached through Zswapped; none matches AnonHugePages ...]
00:03:39.189 11:52:28
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.189 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:39.190 11:52:28 
setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 45088748 kB' 'MemAvailable: 48598868 kB' 'Buffers: 2704 kB' 'Cached: 9500348 kB' 'SwapCached: 0 kB' 'Active: 6500236 kB' 'Inactive: 3505224 kB' 'Active(anon): 6111352 kB' 'Inactive(anon): 0 kB' 'Active(file): 388884 kB' 'Inactive(file): 3505224 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505708 kB' 'Mapped: 211684 kB' 'Shmem: 5608944 kB' 'KReclaimable: 226300 kB' 'Slab: 779856 kB' 'SReclaimable: 226300 kB' 'SUnreclaim: 553556 kB' 'KernelStack: 22144 kB' 'PageTables: 8120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 7466776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216048 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2594164 kB' 'DirectMap2M: 12820480 kB' 'DirectMap1G: 53477376 kB' 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
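The `/proc/meminfo` snapshot printed above is internally consistent on the hugepage side: it reports `HugePages_Total: 1024` with `Hugepagesize: 2048 kB`, and the `Hugetlb: 2097152 kB` line is exactly the product of the two. A quick arithmetic check (values copied from the snapshot, not measured live):

```shell
# Sanity-check the hugepage accounting in the snapshot above:
# total hugetlb memory should equal page count times page size.
hugepages_total=1024   # HugePages_Total from the snapshot
hugepagesize_kb=2048   # Hugepagesize in kB from the snapshot
hugetlb_kb=$((hugepages_total * hugepagesize_kb))
echo "$hugetlb_kb"     # prints 2097152, matching the Hugetlb: line
```

This is why the trace that follows can safely treat `HugePages_Surp: 0` as meaning no surplus pages were allocated beyond the configured pool.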
Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.190 11:52:28 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.190 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.191 11:52:28 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.191 11:52:28 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.191 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue [... the IFS=': ' / read -r var val _ / [[ field == HugePages_Surp ]] / continue cycle repeats for every remaining /proc/meminfo field ...] 00:03:39.192 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.192 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:39.192 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:39.192 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:39.192 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:39.192 11:52:28 setup.sh.hugepages.default_setup --
setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:39.192 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:39.192 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:39.192 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:39.192 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.192 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.192 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.192 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.192 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.192 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.192 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.192 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 45085332 kB' 'MemAvailable: 48595452 kB' 'Buffers: 2704 kB' 'Cached: 9500364 kB' 'SwapCached: 0 kB' 'Active: 6499072 kB' 'Inactive: 3505224 kB' 'Active(anon): 6110188 kB' 'Inactive(anon): 0 kB' 'Active(file): 388884 kB' 'Inactive(file): 3505224 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504496 kB' 'Mapped: 211612 kB' 'Shmem: 5608960 kB' 'KReclaimable: 226300 kB' 'Slab: 779840 kB' 'SReclaimable: 226300 kB' 'SUnreclaim: 553540 kB' 'KernelStack: 22048 kB' 'PageTables: 8564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 7466548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216080 kB' 'VmallocChunk: 
0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2594164 kB' 'DirectMap2M: 12820480 kB' 'DirectMap1G: 53477376 kB' 00:03:39.192 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.192 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue [... the IFS=': ' / read -r var val _ / field-match / continue cycle repeats for every remaining /proc/meminfo field ...] 00:03:39.194 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.194 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:39.194 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:39.194 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:39.194 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:39.194 nr_hugepages=1024 00:03:39.194 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:39.194 resv_hugepages=0 00:03:39.194 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:39.194 surplus_hugepages=0
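The repeated trace cycles above are the shell walking /proc/meminfo one line at a time, splitting each line on ": " and skipping every field until the requested key matches. A minimal sketch of that pattern, assuming a simplified get_meminfo (the real helper in setup/common.sh additionally supports per-NUMA-node meminfo files under /sys/devices/system/node/; this reconstruction is for illustration only):

```shell
#!/usr/bin/env bash
# Simplified reconstruction of the /proc/meminfo scan traced above: split each
# line on ": " and print the value column for the requested key. Not the exact
# setup/common.sh helper.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"   # value column; a trailing "kB" unit lands in _ and is dropped
            return 0
        fi
    done < /proc/meminfo
    echo 0                     # key absent: report 0, matching the "echo 0" in the trace
}

surp=$(get_meminfo HugePages_Surp)
resv=$(get_meminfo HugePages_Rsvd)
echo "surplus_hugepages=$surp resv_hugepages=$resv"
```

Because the space and colon in IFS collapse the run of padding after each key, `read -r var val _` cleanly yields the key, the number, and the unit regardless of column alignment.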
00:03:39.194 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:39.194 anon_hugepages=0 00:03:39.194 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:39.194 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:39.194 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:39.194 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:39.194 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:39.194 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:39.194 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:39.194 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.194 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.194 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.194 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.194 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.194 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.194 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.194 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 45083520 kB' 'MemAvailable: 48593640 kB' 'Buffers: 2704 kB' 'Cached: 9500388 kB' 'SwapCached: 0 kB' 'Active: 6499572 kB' 'Inactive: 3505224 kB' 'Active(anon): 6110688 kB' 'Inactive(anon): 0 kB' 'Active(file): 388884 kB' 'Inactive(file): 3505224 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504996 kB' 'Mapped: 211612 kB' 'Shmem: 5608984 kB' 'KReclaimable: 226300 kB' 'Slab: 779872 kB' 'SReclaimable: 226300 kB' 'SUnreclaim: 553572 kB' 'KernelStack: 22176 kB' 'PageTables: 8564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 7466820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216096 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2594164 kB' 'DirectMap2M: 12820480 kB' 'DirectMap1G: 53477376 kB' 00:03:39.194 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.194 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue [... the IFS=': ' / read -r var val _ / field-match / continue cycle repeats for each /proc/meminfo field while scanning for HugePages_Total ...] 00:03:39.195 11:52:28
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.195 11:52:28 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.195 11:52:28 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# read -r var val _ 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.195 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 
-- # for node in /sys/devices/system/node/node+([0-9]) 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 
'MemFree: 27037892 kB' 'MemUsed: 5601248 kB' 'SwapCached: 0 kB' 'Active: 2008532 kB' 'Inactive: 164640 kB' 'Active(anon): 1748256 kB' 'Inactive(anon): 0 kB' 'Active(file): 260276 kB' 'Inactive(file): 164640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1798608 kB' 'Mapped: 103920 kB' 'AnonPages: 377712 kB' 'Shmem: 1373692 kB' 'KernelStack: 12584 kB' 'PageTables: 6104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 93768 kB' 'Slab: 319696 kB' 'SReclaimable: 93768 kB' 'SUnreclaim: 225928 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.196 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[xtrace elided: the same setup/common.sh@31-32 scan repeats for each remaining node0 meminfo field until HugePages_Surp matches]
00:03:39.197 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.197 11:52:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:39.197 11:52:28 
setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:39.197 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:39.197 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:39.197 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:39.197 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:39.197 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:39.197 node0=1024 expecting 1024 00:03:39.197 11:52:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:39.197 00:03:39.197 real 0m5.365s 00:03:39.197 user 0m1.446s 00:03:39.197 sys 0m2.394s 00:03:39.197 11:52:28 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:39.197 11:52:28 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:39.197 ************************************ 00:03:39.197 END TEST default_setup 00:03:39.197 ************************************ 00:03:39.197 11:52:28 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:39.197 11:52:28 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:39.197 11:52:28 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:39.197 11:52:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:39.197 ************************************ 00:03:39.197 START TEST per_node_1G_alloc 00:03:39.197 ************************************ 00:03:39.197 11:52:28 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # per_node_1G_alloc 00:03:39.197 11:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:39.197 
11:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:39.197 11:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:39.198 11:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:39.198 11:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:39.198 11:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:39.198 11:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:39.198 11:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:39.198 11:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:39.198 11:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:39.198 11:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:39.198 11:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:39.198 11:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:39.198 11:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:39.198 11:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:39.198 11:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:39.198 11:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:39.198 11:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:39.198 11:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:39.198 11:52:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:39.198 11:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:39.198 11:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:39.198 11:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:39.198 11:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:39.198 11:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:39.198 11:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.198 11:52:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:42.486 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:42.486 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:42.486 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:42.486 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:42.486 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:42.486 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:42.486 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:42.486 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:42.486 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:42.486 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:42.486 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:42.486 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:42.487 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:42.487 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:42.487 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:42.487 
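The trace above walks `get_test_nr_hugepages_per_node`: each node id passed in (`0` and `1` here) is assigned the full per-node page count, yielding `NRHUGE=512` on both nodes. A minimal standalone sketch of that distribution step, using the variable names seen in the trace but not the actual SPDK `setup/hugepages.sh` code:

```shell
#!/usr/bin/env bash
# Sketch of the per-node hugepage distribution traced above.
# Values (nodes 0,1 and 512 pages each) are taken from this log run.
nr_hugepages=512
user_nodes=(0 1)
nodes_test=()

# Each requested NUMA node gets the full per-node count.
for node in "${user_nodes[@]}"; do
    nodes_test[node]=$nr_hugepages
done

for node in "${!nodes_test[@]}"; do
    echo "node${node}=${nodes_test[node]}"
done
```

The real script then exports these as `NRHUGE`/`HUGENODE` (here `HUGENODE=0,1`) before invoking `scripts/setup.sh`, as the next trace lines show.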
0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:42.487 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:42.487 11:52:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:42.487 11:52:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:42.487 11:52:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:42.487 11:52:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:42.487 11:52:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:42.487 11:52:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:42.487 11:52:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:42.487 11:52:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:42.487 11:52:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:42.487 11:52:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:42.487 11:52:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:42.487 11:52:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:42.487 11:52:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:42.487 11:52:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.487 11:52:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.487 11:52:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.487 11:52:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.487 11:52:31 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.487 11:52:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.487 11:52:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.487 11:52:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.487 11:52:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 45058808 kB' 'MemAvailable: 48568928 kB' 'Buffers: 2704 kB' 'Cached: 9500492 kB' 'SwapCached: 0 kB' 'Active: 6501844 kB' 'Inactive: 3505224 kB' 'Active(anon): 6112960 kB' 'Inactive(anon): 0 kB' 'Active(file): 388884 kB' 'Inactive(file): 3505224 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506724 kB' 'Mapped: 211640 kB' 'Shmem: 5609088 kB' 'KReclaimable: 226300 kB' 'Slab: 780424 kB' 'SReclaimable: 226300 kB' 'SUnreclaim: 554124 kB' 'KernelStack: 22160 kB' 'PageTables: 8740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 7464992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216048 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2594164 kB' 'DirectMap2M: 12820480 kB' 'DirectMap1G: 53477376 kB' 00:03:42.487 11:52:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.487 11:52:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:42.487 11:52:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.487 11:52:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.487 11:52:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.487 11:52:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.487 11:52:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.487 11:52:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.487 11:52:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.487 11:52:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.487 11:52:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.487 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.487 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.487 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.487 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.487 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.487 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.487 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.487 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.487 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.487 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.487 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.487 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.487 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.487 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.487 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.487 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.487 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.487 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.752 
11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.752 11:52:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.752 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.753 11:52:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 45059060 kB' 'MemAvailable: 48569180 kB' 'Buffers: 2704 kB' 'Cached: 9500496 kB' 'SwapCached: 0 kB' 'Active: 6501000 kB' 'Inactive: 3505224 kB' 'Active(anon): 6112116 kB' 'Inactive(anon): 0 kB' 'Active(file): 388884 kB' 'Inactive(file): 3505224 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506408 kB' 'Mapped: 211624 kB' 'Shmem: 5609092 kB' 'KReclaimable: 226300 kB' 'Slab: 780396 kB' 'SReclaimable: 226300 kB' 'SUnreclaim: 554096 kB' 'KernelStack: 22192 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 7465012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216016 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2594164 kB' 'DirectMap2M: 12820480 kB' 'DirectMap1G: 53477376 kB' 
00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.753 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.755 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val
_ 00:03:42.755 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.755 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.755 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:42.755 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:42.755 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:42.755 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:42.755 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:42.755 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:42.755 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.755 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.755 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.755 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.755 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.755 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.755 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.755 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.755 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 45059500 kB' 'MemAvailable: 48569620 kB' 'Buffers: 2704 kB' 'Cached: 9500512 kB' 'SwapCached: 0 kB' 'Active: 6502552 kB' 'Inactive: 3505224 kB' 
'Active(anon): 6113668 kB' 'Inactive(anon): 0 kB' 'Active(file): 388884 kB' 'Inactive(file): 3505224 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507944 kB' 'Mapped: 212128 kB' 'Shmem: 5609108 kB' 'KReclaimable: 226300 kB' 'Slab: 780396 kB' 'SReclaimable: 226300 kB' 'SUnreclaim: 554096 kB' 'KernelStack: 22192 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 7467180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215984 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2594164 kB' 'DirectMap2M: 12820480 kB' 'DirectMap1G: 53477376 kB' 00:03:42.755 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.755 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.755 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.755 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.755 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.755 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.755 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.755 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.755 11:52:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.756 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.756 11:52:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:42.757 nr_hugepages=1024 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:42.757 resv_hugepages=0 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:42.757 surplus_hugepages=0 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:42.757 anon_hugepages=0 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 
-- # get_meminfo HugePages_Total 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.757 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 45056068 kB' 'MemAvailable: 48566188 kB' 'Buffers: 2704 kB' 'Cached: 9500536 kB' 'SwapCached: 0 kB' 'Active: 6506684 kB' 'Inactive: 3505224 kB' 'Active(anon): 6117800 kB' 'Inactive(anon): 0 kB' 'Active(file): 388884 kB' 'Inactive(file): 3505224 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512040 kB' 'Mapped: 212476 kB' 'Shmem: 5609132 kB' 'KReclaimable: 226300 kB' 'Slab: 780396 kB' 'SReclaimable: 226300 kB' 'SUnreclaim: 554096 kB' 'KernelStack: 22192 kB' 'PageTables: 8800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 7471176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215988 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2594164 kB' 'DirectMap2M: 12820480 kB' 'DirectMap1G: 53477376 kB' 00:03:42.757
[... identical setup/common.sh@31-32 trace ("IFS=': '" / "read -r var val _" / "[[ <key> == HugePages_Total ]]" / "continue") repeated for each /proc/meminfo key, MemTotal through FilePmdMapped ...]
11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r
var val _ 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:42.759 11:52:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.759 11:52:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 28078124 kB' 'MemUsed: 4561016 kB' 'SwapCached: 0 kB' 'Active: 2006800 kB' 'Inactive: 164640 kB' 'Active(anon): 1746524 kB' 'Inactive(anon): 0 kB' 'Active(file): 260276 kB' 'Inactive(file): 164640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1798628 kB' 'Mapped: 103932 kB' 'AnonPages: 375928 kB' 'Shmem: 1373712 kB' 'KernelStack: 12440 kB' 'PageTables: 5644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 93768 kB' 'Slab: 319916 kB' 'SReclaimable: 93768 kB' 'SUnreclaim: 226148 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.759 11:52:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.759 11:52:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.759 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.760 11:52:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.760 11:52:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 
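The trace above repeatedly scans a meminfo snapshot field by field with `IFS=': ' read -r var val _`, comparing each field name against the requested key (`HugePages_Total`, then `HugePages_Surp`) and echoing the matching value. The following is a minimal sketch of that pattern; the function and variable names mirror the xtrace output but are a reconstruction, not the authoritative `setup/common.sh` source.

```shell
#!/usr/bin/env bash
# Sketch (assumed names) of the get_meminfo helper traced above.
# Scans /proc/meminfo, or the per-NUMA-node view when a node index
# is given, and prints the value of the requested field.
get_meminfo() {
    local get=$1 node=$2
    local var val _
    local mem_f=/proc/meminfo
    # Per-node files live under sysfs; fall back to the global view.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node lines are prefixed "Node N "; strip that, then split each
    # "Field: value [kB]" line on ':' and spaces, exactly as the trace does.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed -E 's/^Node +[0-9]+ +//' "$mem_f")
    return 1
}
```

Usage follows the traced calls, e.g. `get_meminfo HugePages_Surp 0` to read node 0's surplus-hugepage count.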
00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 16978060 kB' 'MemUsed: 10678020 kB' 'SwapCached: 0 kB' 'Active: 4499256 kB' 'Inactive: 3340584 kB' 'Active(anon): 4370648 kB' 'Inactive(anon): 0 kB' 'Active(file): 128608 kB' 'Inactive(file): 3340584 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7704652 kB' 'Mapped: 108040 kB' 'AnonPages: 135452 kB' 'Shmem: 4235460 kB' 'KernelStack: 9768 kB' 'PageTables: 3200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132532 kB' 'Slab: 460480 kB' 'SReclaimable: 132532 kB' 'SUnreclaim: 327948 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.760 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.761 11:52:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.761 
11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
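Before the per-node lookups, the trace's `get_nodes` step (hugepages.sh@29–33) enumerates NUMA nodes with an `extglob` pattern and derives each node's index from the directory name. A minimal sketch of that enumeration, with assumed names taken from the xtrace output:

```shell
#!/usr/bin/env bash
# Sketch (assumed names) of the get_nodes step traced above: enumerate
# NUMA nodes under sysfs and record a per-node hugepage count.
shopt -s extglob nullglob
declare -A nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    # Strip everything up to the trailing "node" to get the numeric index,
    # e.g. ".../node1" -> "1".
    idx=${node##*node}
    # Per-node meminfo lines look like "Node 1 HugePages_Total: 512".
    nodes_sys[$idx]=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
done
no_nodes=${#nodes_sys[@]}
echo "no_nodes=$no_nodes"
```

On the machine in this log the loop runs twice (`no_nodes=2`), and the test then checks that the per-node totals plus surplus and reserved pages sum to the 1024 pages requested.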
00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.761 11:52:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.761 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.762 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.762 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.762 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.762 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.762 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.762 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.762 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.762 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.762 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.762 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.762 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.762 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.762 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.762 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.762 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.762 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.762 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.762 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.762 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.762 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:42.762 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:42.762 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:42.762 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:42.762 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:03:42.762 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:42.762 node0=512 expecting 512 00:03:42.762 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:42.762 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:42.762 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:42.762 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:42.762 node1=512 expecting 512 00:03:42.762 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:42.762 00:03:42.762 real 0m3.555s 00:03:42.762 user 0m1.305s 00:03:42.762 sys 0m2.316s 00:03:42.762 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:42.762 11:52:32 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:42.762 ************************************ 00:03:42.762 END TEST per_node_1G_alloc 00:03:42.762 ************************************ 00:03:42.762 11:52:32 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:42.762 11:52:32 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:42.762 11:52:32 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:42.762 11:52:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:42.762 ************************************ 00:03:42.762 START TEST even_2G_alloc 00:03:42.762 ************************************ 00:03:43.021 11:52:32 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # even_2G_alloc 00:03:43.021 11:52:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # 
get_test_nr_hugepages 2097152 00:03:43.021 11:52:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:43.021 11:52:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:43.021 11:52:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:43.021 11:52:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:43.021 11:52:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:43.021 11:52:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:43.021 11:52:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:43.021 11:52:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:43.021 11:52:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:43.021 11:52:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:43.021 11:52:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:43.021 11:52:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:43.021 11:52:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:43.021 11:52:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:43.021 11:52:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:43.021 11:52:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:43.021 11:52:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:43.021 11:52:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:43.021 11:52:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 
00:03:43.021 11:52:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:43.021 11:52:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:43.021 11:52:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:43.021 11:52:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:43.021 11:52:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:43.021 11:52:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:43.021 11:52:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.021 11:52:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:46.315 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:46.315 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:46.315 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:46.315 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:46.315 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:46.315 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:46.315 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:46.315 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:46.315 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:46.315 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:46.315 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:46.315 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:46.315 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:46.315 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:46.315 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:46.315 0000:80:04.0 (8086 2021): Already using the vfio-pci 
driver 00:03:46.315 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.315 11:52:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 45064864 kB' 'MemAvailable: 48574984 kB' 'Buffers: 2704 kB' 'Cached: 9500656 kB' 'SwapCached: 0 kB' 'Active: 6500868 kB' 'Inactive: 3505224 kB' 'Active(anon): 6111984 kB' 'Inactive(anon): 0 kB' 'Active(file): 388884 kB' 'Inactive(file): 3505224 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505600 kB' 'Mapped: 210596 kB' 'Shmem: 5609252 kB' 'KReclaimable: 226300 kB' 'Slab: 779484 kB' 'SReclaimable: 226300 kB' 'SUnreclaim: 553184 kB' 'KernelStack: 22080 kB' 'PageTables: 8384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 7458024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216000 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2594164 kB' 'DirectMap2M: 12820480 kB' 'DirectMap1G: 53477376 kB' 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.315 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 11:52:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 
11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.316 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 45065868 kB' 'MemAvailable: 48575988 kB' 'Buffers: 2704 kB' 'Cached: 9500660 kB' 'SwapCached: 0 kB' 'Active: 6500104 kB' 'Inactive: 3505224 kB' 'Active(anon): 6111220 kB' 'Inactive(anon): 0 kB' 'Active(file): 388884 kB' 
'Inactive(file): 3505224 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505324 kB' 'Mapped: 210520 kB' 'Shmem: 5609256 kB' 'KReclaimable: 226300 kB' 'Slab: 779476 kB' 'SReclaimable: 226300 kB' 'SUnreclaim: 553176 kB' 'KernelStack: 22064 kB' 'PageTables: 8328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 7458044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215968 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2594164 kB' 'DirectMap2M: 12820480 kB' 'DirectMap1G: 53477376 kB' 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.317 11:52:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 11:52:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.317 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.318 11:52:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.318 11:52:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.318 11:52:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.318 11:52:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.318 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 45066500 kB' 'MemAvailable: 48576620 kB' 'Buffers: 2704 
kB' 'Cached: 9500676 kB' 'SwapCached: 0 kB' 'Active: 6500140 kB' 'Inactive: 3505224 kB' 'Active(anon): 6111256 kB' 'Inactive(anon): 0 kB' 'Active(file): 388884 kB' 'Inactive(file): 3505224 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505324 kB' 'Mapped: 210520 kB' 'Shmem: 5609272 kB' 'KReclaimable: 226300 kB' 'Slab: 779476 kB' 'SReclaimable: 226300 kB' 'SUnreclaim: 553176 kB' 'KernelStack: 22064 kB' 'PageTables: 8328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 7458064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215968 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2594164 kB' 'DirectMap2M: 12820480 kB' 'DirectMap1G: 53477376 kB' 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.319 11:52:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.319 11:52:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.319 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... identical setup/common.sh@31/@32 iterations repeat for each remaining /proc/meminfo field (Mlocked through HugePages_Free), every one failing the HugePages_Rsvd match and hitting continue ...]
00:03:46.321 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- #
[[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.321 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.321 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:46.321 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:46.321 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:46.321 nr_hugepages=1024 00:03:46.321 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:46.321 resv_hugepages=0 00:03:46.321 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:46.321 surplus_hugepages=0 00:03:46.321 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:46.321 anon_hugepages=0 00:03:46.321 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.321 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:46.321 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:46.321 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:46.321 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:46.321 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:46.321 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.321 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.321 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.321 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.321 11:52:35 
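The trace above is the `get_meminfo` helper from setup/common.sh scanning /proc/meminfo one "Key: value" pair at a time until the requested field (here HugePages_Rsvd) matches, then echoing its value. A minimal sketch of that pattern, reading a sample file rather than the live /proc/meminfo (the file path and values below are illustrative, not taken from this run):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern traced above: split each line of a
# meminfo-style file on ': ' and return the value for one requested key.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip non-matching fields, as in the trace
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Illustrative stand-in for /proc/meminfo.
sample=$(mktemp)
printf '%s\n' 'HugePages_Total: 1024' 'HugePages_Rsvd: 0' > "$sample"
get_meminfo HugePages_Rsvd "$sample"    # prints 0
```

The trailing `_` in the `read` soaks up the "kB" unit suffix, so fields such as `MemTotal: 60295220 kB` still yield a bare number.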
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.321 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.321 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.321 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.321 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 45066500 kB' 'MemAvailable: 48576620 kB' 'Buffers: 2704 kB' 'Cached: 9500676 kB' 'SwapCached: 0 kB' 'Active: 6500140 kB' 'Inactive: 3505224 kB' 'Active(anon): 6111256 kB' 'Inactive(anon): 0 kB' 'Active(file): 388884 kB' 'Inactive(file): 3505224 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505324 kB' 'Mapped: 210520 kB' 'Shmem: 5609272 kB' 'KReclaimable: 226300 kB' 'Slab: 779476 kB' 'SReclaimable: 226300 kB' 'SUnreclaim: 553176 kB' 'KernelStack: 22064 kB' 'PageTables: 8328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 7458088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215968 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2594164 kB' 'DirectMap2M: 12820480 kB' 'DirectMap1G: 53477376 kB' 00:03:46.321 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.321 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.321 11:52:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.321 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical setup/common.sh@31/@32 iterations repeat for each /proc/meminfo field (MemFree through Unaccepted), every one failing the HugePages_Total match and hitting continue ...]
00:03:46.322 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.322 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:46.322 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:46.322 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.322 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:46.322 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:46.322 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.322 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:46.322 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.322 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:46.322 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:46.322 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:46.585 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.585 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:46.585 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:46.585 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.585 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:46.585 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:46.585 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.585 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.585 11:52:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:46.585 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:46.585 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.585 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.585 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.585 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.585 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 28087780 kB' 'MemUsed: 4551360 kB' 'SwapCached: 0 kB' 'Active: 2006192 kB' 'Inactive: 164640 kB' 'Active(anon): 1745916 kB' 'Inactive(anon): 0 kB' 'Active(file): 260276 kB' 'Inactive(file): 164640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1798632 kB' 'Mapped: 102956 kB' 'AnonPages: 375372 kB' 'Shmem: 1373716 kB' 'KernelStack: 12456 kB' 'PageTables: 5628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 93768 kB' 'Slab: 319140 kB' 'SReclaimable: 93768 kB' 'SUnreclaim: 225372 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:46.585 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.585 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.585 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.585 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.585 11:52:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.585 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.585 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.585 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.585 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.585 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.585 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.585 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.586 11:52:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.586 
11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.586 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.587 11:52:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node1/meminfo 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 16981776 kB' 'MemUsed: 10674304 kB' 'SwapCached: 0 kB' 'Active: 4493848 kB' 'Inactive: 3340584 kB' 'Active(anon): 4365240 kB' 'Inactive(anon): 0 kB' 'Active(file): 128608 kB' 'Inactive(file): 3340584 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7704812 kB' 'Mapped: 107564 kB' 'AnonPages: 129756 kB' 'Shmem: 4235620 kB' 'KernelStack: 9592 kB' 'PageTables: 2652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132532 kB' 'Slab: 460336 kB' 'SReclaimable: 132532 kB' 'SUnreclaim: 327804 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.587 
11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.587 11:52:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.587 11:52:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.587 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
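
The long run of `IFS=': '` / `read -r var val _` / `continue` entries above is the trace of a single loop scanning `/proc/meminfo` field by field until it finds the requested key (here `HugePages_Surp`), then echoing its value. A minimal sketch of that pattern, reconstructed from the trace — the helper name `meminfo_value` is my own, and it reads stdin instead of `/proc/meminfo` so it can be exercised with sample input:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo loop visible in this xtrace (setup/common.sh):
# split each line on ': ' into field name and value, skip non-matching
# fields, print the value of the requested one. Helper name is hypothetical.
meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Non-matching fields produce the "continue" entries in the trace.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    echo 0   # field absent: report 0, as the trace's "echo 0 / return 0" does
}

# Example: feed a couple of /proc/meminfo-style lines.
printf 'MemTotal: 60295220 kB\nHugePages_Surp: 0\n' | meminfo_value HugePages_Surp
```

In the real script the loop would read `< /proc/meminfo` (or a per-node `meminfo` under `/sys/devices/system/node/`); the stdin form above is just for illustration.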
00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:46.588 node0=512 expecting 512 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:46.588 node1=512 expecting 512 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:46.588 00:03:46.588 real 0m3.610s 00:03:46.588 user 0m1.332s 00:03:46.588 sys 0m2.338s 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:03:46.588 11:52:35 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:46.588 ************************************ 00:03:46.588 END TEST even_2G_alloc 00:03:46.588 ************************************ 00:03:46.588 11:52:35 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:46.588 11:52:35 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:46.588 11:52:35 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:46.588 11:52:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:46.588 ************************************ 00:03:46.588 START TEST odd_alloc 00:03:46.588 ************************************ 00:03:46.588 11:52:35 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # odd_alloc 00:03:46.588 11:52:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:46.588 11:52:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:46.588 11:52:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:46.588 11:52:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:46.588 11:52:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:46.588 11:52:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:46.588 11:52:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:46.588 11:52:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:46.588 11:52:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:46.588 11:52:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:46.588 11:52:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 
-- # nodes_test=() 00:03:46.588 11:52:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:46.588 11:52:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:46.588 11:52:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:46.588 11:52:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:46.588 11:52:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:46.588 11:52:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:46.588 11:52:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:46.588 11:52:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:46.588 11:52:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:46.588 11:52:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:46.588 11:52:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:46.588 11:52:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:46.588 11:52:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:46.588 11:52:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:46.588 11:52:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:46.588 11:52:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.588 11:52:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:49.883 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:49.883 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:49.883 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:49.883 0000:00:04.4 (8086 2021): Already using the vfio-pci 
driver 00:03:49.883 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:49.883 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:49.883 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:49.883 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:49.883 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:49.883 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:49.883 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:49.883 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:49.883 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:49.883 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:49.883 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:49.883 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:49.883 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:50.148 11:52:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 45055004 kB' 'MemAvailable: 48565124 kB' 'Buffers: 2704 kB' 'Cached: 9500816 kB' 'SwapCached: 0 kB' 'Active: 6501372 kB' 'Inactive: 3505224 kB' 'Active(anon): 6112488 kB' 'Inactive(anon): 0 kB' 'Active(file): 388884 kB' 'Inactive(file): 3505224 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505904 kB' 'Mapped: 210648 kB' 'Shmem: 5609412 kB' 'KReclaimable: 226300 kB' 'Slab: 779784 kB' 'SReclaimable: 226300 kB' 'SUnreclaim: 553484 kB' 'KernelStack: 22192 kB' 'PageTables: 8852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 7461680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216288 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 
kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2594164 kB' 'DirectMap2M: 12820480 kB' 'DirectMap1G: 53477376 kB' 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.148 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
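
The backslash-riddled patterns in these entries (`\A\n\o\n\H\u\g\e\P\a\g\e\s`, `\H\u\g\e\P\a\g\e\s\_\S\u\r\p`) are not in the script source: when bash xtrace prints a `[[ ... == "$get" ]]` test, it escapes every character of the quoted right-hand side to show it is matched literally rather than as a glob. The same comparisons can be written directly with those escapes — a small sketch:

```shell
#!/usr/bin/env bash
# In [[ ]], a backslash-escaped character is literal, so this is exactly
# the literal string match the trace shows for each meminfo field name.
check_field() {
    local var=$1
    if [[ $var == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]; then
        echo match
    else
        echo continue   # mirrors the trace's "continue" for other fields
    fi
}

check_field AnonHugePages   # the one field that matches
check_field Writeback       # any other field falls through
```

This is why the log looks so verbose: every one of the ~50 meminfo fields produces one escaped comparison plus a `continue` entry per `get_meminfo` call.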
00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.149 11:52:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.149 11:52:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.149 11:52:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.149 11:52:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.149 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 45058460 kB' 'MemAvailable: 48568580 kB' 'Buffers: 
2704 kB' 'Cached: 9500820 kB' 'SwapCached: 0 kB' 'Active: 6501984 kB' 'Inactive: 3505224 kB' 'Active(anon): 6113100 kB' 'Inactive(anon): 0 kB' 'Active(file): 388884 kB' 'Inactive(file): 3505224 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506560 kB' 'Mapped: 210604 kB' 'Shmem: 5609416 kB' 'KReclaimable: 226300 kB' 'Slab: 779604 kB' 'SReclaimable: 226300 kB' 'SUnreclaim: 553304 kB' 'KernelStack: 22160 kB' 'PageTables: 8496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 7461700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216240 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2594164 kB' 'DirectMap2M: 12820480 kB' 'DirectMap1G: 53477376 kB' 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.150 11:52:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.150 11:52:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.150 11:52:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.150 11:52:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.150 11:52:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.150 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.151 11:52:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.151 11:52:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.151 11:52:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.151 11:52:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.151 11:52:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:50.151 11:52:39 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.151 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 45059660 kB' 'MemAvailable: 48569780 kB' 'Buffers: 2704 kB' 'Cached: 9500836 kB' 'SwapCached: 0 kB' 'Active: 6501364 kB' 'Inactive: 3505224 kB' 'Active(anon): 6112480 kB' 'Inactive(anon): 0 kB' 'Active(file): 388884 kB' 'Inactive(file): 3505224 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506288 kB' 'Mapped: 210528 kB' 'Shmem: 5609432 kB' 'KReclaimable: 226300 kB' 'Slab: 779564 kB' 'SReclaimable: 226300 kB' 'SUnreclaim: 553264 kB' 'KernelStack: 22160 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 
'Committed_AS: 7474508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216256 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2594164 kB' 'DirectMap2M: 12820480 kB' 'DirectMap1G: 53477376 kB' 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.152 11:52:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.152 11:52:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.152 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.153 11:52:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.153 11:52:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.153 11:52:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.153 
11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.153 11:52:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:50.153 nr_hugepages=1025 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:50.153 resv_hugepages=0 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:50.153 surplus_hugepages=0 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:50.153 anon_hugepages=0 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:50.153 11:52:39 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@19 -- # local var val 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 45057220 kB' 'MemAvailable: 48567340 kB' 'Buffers: 2704 kB' 'Cached: 9500856 kB' 'SwapCached: 0 kB' 'Active: 6501196 kB' 'Inactive: 3505224 kB' 'Active(anon): 6112312 kB' 'Inactive(anon): 0 kB' 'Active(file): 388884 kB' 'Inactive(file): 3505224 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506096 kB' 'Mapped: 210528 kB' 'Shmem: 5609452 kB' 'KReclaimable: 226300 kB' 'Slab: 779564 kB' 'SReclaimable: 226300 kB' 'SUnreclaim: 553264 kB' 'KernelStack: 22208 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 7461500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216224 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2594164 kB' 'DirectMap2M: 12820480 kB' 'DirectMap1G: 53477376 kB' 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.154 
11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.154 11:52:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.154 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.155 
11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.155 
11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local 
node 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.155 11:52:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 28075732 kB' 'MemUsed: 4563408 kB' 'SwapCached: 0 kB' 'Active: 2007988 kB' 'Inactive: 164640 kB' 'Active(anon): 1747712 kB' 'Inactive(anon): 0 kB' 'Active(file): 260276 kB' 'Inactive(file): 164640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1798644 kB' 'Mapped: 102964 kB' 'AnonPages: 377104 kB' 'Shmem: 1373728 kB' 'KernelStack: 12440 kB' 'PageTables: 5560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 93768 kB' 'Slab: 319208 kB' 'SReclaimable: 93768 kB' 'SUnreclaim: 225440 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.155 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.156 11:52:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.156 11:52:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.156 11:52:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.156 11:52:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.156 11:52:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.156 11:52:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.156 
11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.156 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:50.157 
11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 16979908 kB' 'MemUsed: 10676172 kB' 'SwapCached: 0 kB' 'Active: 4493576 kB' 'Inactive: 3340584 kB' 'Active(anon): 4364968 kB' 'Inactive(anon): 0 kB' 'Active(file): 128608 kB' 'Inactive(file): 3340584 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7704944 kB' 'Mapped: 107564 kB' 'AnonPages: 129340 kB' 'Shmem: 4235752 kB' 'KernelStack: 9768 kB' 'PageTables: 3108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132532 kB' 'Slab: 460356 kB' 'SReclaimable: 132532 
kB' 'SUnreclaim: 327824 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.157 
11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.157 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.158 11:52:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.158 11:52:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.158 11:52:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.158 11:52:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:50.158 node0=512 expecting 513 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:50.158 node1=513 expecting 512 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:50.158 00:03:50.158 real 0m3.688s 00:03:50.158 user 0m1.432s 00:03:50.158 sys 0m2.325s 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:50.158 11:52:39 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:50.158 ************************************ 00:03:50.158 END TEST odd_alloc 00:03:50.158 ************************************ 00:03:50.419 11:52:39 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:50.419 11:52:39 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:50.419 11:52:39 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:50.419 11:52:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:50.419 ************************************ 00:03:50.419 START TEST custom_alloc 00:03:50.419 ************************************ 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # custom_alloc 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:50.419 11:52:39 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:50.419 11:52:39 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:50.419 11:52:39 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.419 11:52:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:53.719 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:53.719 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:53.719 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:53.719 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:53.719 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:53.719 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:53.719 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:53.719 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:53.719 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:53.719 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:53.719 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:53.719 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:53.719 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:53.719 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:53.719 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:53.719 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:53.719 0000:d8:00.0 (8086 0a54): Already using 
the vfio-pci driver 00:03:53.719 11:52:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:53.719 11:52:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:53.719 11:52:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:53.719 11:52:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:53.719 11:52:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:53.719 11:52:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:53.719 11:52:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:53.719 11:52:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:53.719 11:52:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:53.719 11:52:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:53.719 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:53.719 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:53.719 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:53.719 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.719 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.719 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.719 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.719 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.719 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.719 11:52:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.719 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.719 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 44046552 kB' 'MemAvailable: 47556668 kB' 'Buffers: 2704 kB' 'Cached: 9500980 kB' 'SwapCached: 0 kB' 'Active: 6503144 kB' 'Inactive: 3505224 kB' 'Active(anon): 6114260 kB' 'Inactive(anon): 0 kB' 'Active(file): 388884 kB' 'Inactive(file): 3505224 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507600 kB' 'Mapped: 210664 kB' 'Shmem: 5609576 kB' 'KReclaimable: 226292 kB' 'Slab: 780452 kB' 'SReclaimable: 226292 kB' 'SUnreclaim: 554160 kB' 'KernelStack: 22080 kB' 'PageTables: 8480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 7459172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216112 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2594164 kB' 'DirectMap2M: 12820480 kB' 'DirectMap1G: 53477376 kB' 00:03:53.719 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.719 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.719 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.719 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.719 11:52:42 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.719 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.720 11:52:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.720 11:52:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.720 
11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.720 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.721 
11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 44047496 kB' 'MemAvailable: 47557612 kB' 'Buffers: 2704 kB' 'Cached: 9500984 kB' 'SwapCached: 0 kB' 'Active: 6501584 kB' 'Inactive: 3505224 kB' 'Active(anon): 6112700 kB' 'Inactive(anon): 0 kB' 'Active(file): 388884 kB' 'Inactive(file): 3505224 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 
0 kB' 'AnonPages: 506356 kB' 'Mapped: 210540 kB' 'Shmem: 5609580 kB' 'KReclaimable: 226292 kB' 'Slab: 780420 kB' 'SReclaimable: 226292 kB' 'SUnreclaim: 554128 kB' 'KernelStack: 22016 kB' 'PageTables: 8120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 7459324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216048 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2594164 kB' 'DirectMap2M: 12820480 kB' 'DirectMap1G: 53477376 kB' 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.721 11:52:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.721 
11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.721 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.722 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.722 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.722 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.722 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.722 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.722 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.722 11:52:42 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:53.722 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.722 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.722 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.722 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.722 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.722 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.722 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.722 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.722 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.722 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.722 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.722 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.722 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.722 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.722 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.722 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.722 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.722 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.722 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.722 11:52:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.722 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.722 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.722 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.722 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.722 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.722 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.722 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.722 11:52:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.722 11:52:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.722 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.723 11:52:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.723 11:52:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 44047872 kB' 'MemAvailable: 47557988 kB' 'Buffers: 2704 kB' 'Cached: 9501000 kB' 'SwapCached: 0 kB' 'Active: 6501612 kB' 'Inactive: 3505224 kB' 'Active(anon): 6112728 kB' 'Inactive(anon): 0 kB' 'Active(file): 388884 kB' 'Inactive(file): 3505224 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506356 kB' 'Mapped: 210540 kB' 'Shmem: 5609596 kB' 'KReclaimable: 226292 kB' 'Slab: 780420 kB' 'SReclaimable: 226292 kB' 'SUnreclaim: 554128 kB' 'KernelStack: 22016 kB' 
'PageTables: 8120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 7459348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216048 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2594164 kB' 'DirectMap2M: 12820480 kB' 'DirectMap1G: 53477376 kB' 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.723 11:52:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.723 
11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.723 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.724 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.724 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.724 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.724 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.724 11:52:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:53.724 11:52:43 setup.sh.hugepages.custom_alloc
[... setup/common.sh@31-32 xtrace elided: IFS=': '; read -r var val _; each remaining /proc/meminfo field (Dirty through HugePages_Free) compared against HugePages_Rsvd and skipped via continue ...]
00:03:53.725 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.725 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.725 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:53.725 11:52:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:53.725 11:52:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:53.725 nr_hugepages=1536 00:03:53.725 11:52:43 setup.sh.hugepages.custom_alloc
-- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:53.725 resv_hugepages=0 00:03:53.725 11:52:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:53.725 surplus_hugepages=0 00:03:53.725 11:52:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:53.725 anon_hugepages=0 00:03:53.725 11:52:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:53.725 11:52:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:53.725 11:52:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:53.725 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:53.725 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:53.725 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:53.725 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.725 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.725 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.725 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.725 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.725 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.725 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.725 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.725 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 44047872 kB' 'MemAvailable: 47557988 kB' 
'Buffers: 2704 kB' 'Cached: 9501020 kB' 'SwapCached: 0 kB' 'Active: 6501224 kB' 'Inactive: 3505224 kB' 'Active(anon): 6112340 kB' 'Inactive(anon): 0 kB' 'Active(file): 388884 kB' 'Inactive(file): 3505224 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505916 kB' 'Mapped: 210540 kB' 'Shmem: 5609616 kB' 'KReclaimable: 226292 kB' 'Slab: 780420 kB' 'SReclaimable: 226292 kB' 'SUnreclaim: 554128 kB' 'KernelStack: 22000 kB' 'PageTables: 8072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 7459500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216048 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2594164 kB' 'DirectMap2M: 12820480 kB' 'DirectMap1G: 53477376 kB' 00:03:53.725 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.725 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.725 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.725 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.725 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.725 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.725 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.725 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:53.725
[... setup/common.sh@31-32 xtrace elided: each remaining /proc/meminfo field (MemAvailable through Unaccepted) compared against HugePages_Total and skipped via continue ...]
00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in
/sys/devices/system/node/node+([0-9]) 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 28097672 kB' 'MemUsed: 
4541468 kB' 'SwapCached: 0 kB' 'Active: 2007068 kB' 'Inactive: 164640 kB' 'Active(anon): 1746792 kB' 'Inactive(anon): 0 kB' 'Active(file): 260276 kB' 'Inactive(file): 164640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1798676 kB' 'Mapped: 102976 kB' 'AnonPages: 376172 kB' 'Shmem: 1373760 kB' 'KernelStack: 12424 kB' 'PageTables: 5540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 93768 kB' 'Slab: 319984 kB' 'SReclaimable: 93768 kB' 'SUnreclaim: 226216 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.727 11:52:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.727 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.728 11:52:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.728 11:52:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:53.728 11:52:43 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.728 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 15950428 kB' 'MemUsed: 11705652 kB' 'SwapCached: 0 kB' 'Active: 4495072 kB' 'Inactive: 3340584 kB' 'Active(anon): 4366464 kB' 'Inactive(anon): 0 kB' 'Active(file): 128608 kB' 'Inactive(file): 3340584 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7705104 kB' 'Mapped: 107564 kB' 'AnonPages: 130644 kB' 'Shmem: 4235912 kB' 'KernelStack: 9640 kB' 'PageTables: 
2748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132524 kB' 'Slab: 460436 kB' 'SReclaimable: 132524 kB' 'SUnreclaim: 327912 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.729 11:52:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.729 11:52:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.729 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:53.730 11:52:43 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:53.730 node0=512 expecting 512 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:53.730 node1=1024 expecting 1024 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:53.730 00:03:53.730 real 0m3.394s 00:03:53.730 user 0m1.230s 00:03:53.730 sys 0m2.194s 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:53.730 11:52:43 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:53.730 ************************************ 00:03:53.730 END TEST custom_alloc 00:03:53.730 ************************************ 00:03:53.730 11:52:43 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:53.730 11:52:43 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:53.730 11:52:43 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:53.730 11:52:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:53.730 ************************************ 00:03:53.730 START TEST no_shrink_alloc 00:03:53.730 ************************************ 00:03:53.730 11:52:43 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # no_shrink_alloc 00:03:53.730 11:52:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:53.730 11:52:43 
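The long runs of `IFS=': '` / `read -r var val _` / `continue` trace above are one pass of the meminfo scan in setup/common.sh@31-33: each line of /proc/meminfo is split on `': '` into a field name and value, non-matching fields are skipped, and the matching value is echoed (0 if the field is absent). A minimal sketch of that pattern, with an illustrative function name rather than the exact SPDK helper:

```shell
#!/usr/bin/env bash
# Sketch of the field scan traced above: split each "Name: value [unit]"
# line on ': ', skip until the requested field matches, echo its value.
get_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the "continue" lines in the trace
        echo "$val"                        # common.sh@33 -- echo, then return
        return 0
    done
    echo 0                                 # field not present: default to 0
}

printf '%s\n' 'HugePages_Total: 1024' 'HugePages_Surp: 0' |
    get_field HugePages_Total              # -> 1024
```

The third `read` target (`_`) swallows a trailing unit such as `kB`, which is why `MemTotal: 60295220 kB` yields the bare number.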
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:53.730 11:52:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:53.730 11:52:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:53.730 11:52:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:53.730 11:52:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:53.730 11:52:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:53.730 11:52:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:53.730 11:52:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:53.730 11:52:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:53.730 11:52:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:53.730 11:52:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:53.730 11:52:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:53.730 11:52:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:53.730 11:52:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:53.730 11:52:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:53.730 11:52:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:53.730 11:52:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:53.730 11:52:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:53.730 11:52:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 
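The `get_test_nr_hugepages 2097152 0` trace above (hugepages.sh@49-73) converts a requested size in kB into a count of default-sized hugepages and assigns it to the listed NUMA nodes, ending with `nr_hugepages=1024` for node 0. A sketch of that computation under the values shown in the trace (2048 kB Hugepagesize); variable names are illustrative, not the exact SPDK helpers:

```shell
#!/usr/bin/env bash
# Sketch of the per-node hugepage computation traced above.
default_hugepages=2048          # kB, Hugepagesize from /proc/meminfo
size=2097152                    # kB, first argument in the trace
node_ids=('0')                  # remaining arguments select NUMA nodes

(( size >= default_hugepages )) || exit 1        # hugepages.sh@55 check
nr_hugepages=$(( size / default_hugepages ))     # 2097152 / 2048 = 1024

declare -A nodes_test
for node in "${node_ids[@]}"; do
    nodes_test[$node]=$nr_hugepages              # hugepages.sh@71 assignment
done

echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]}"
# -> nr_hugepages=1024 node0=1024
```

This matches the later verification pass, where `HugePages_Total: 1024` is read back from meminfo and compared against the expected per-node counts.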
00:03:53.730 11:52:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.730 11:52:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:57.026 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:57.026 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:57.026 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:57.026 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:57.026 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:57.026 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:57.026 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:57.026 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:57.026 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:57.026 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:57.026 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:57.026 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:57.026 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:57.026 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:57.026 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:57.026 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:57.026 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # 
local surp 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 45081984 kB' 'MemAvailable: 48592100 kB' 'Buffers: 2704 kB' 'Cached: 9501148 kB' 'SwapCached: 0 kB' 'Active: 6504228 kB' 'Inactive: 3505224 kB' 'Active(anon): 6115344 kB' 'Inactive(anon): 0 kB' 'Active(file): 388884 kB' 'Inactive(file): 3505224 kB' 'Unevictable: 3072 
kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 508576 kB' 'Mapped: 210872 kB' 'Shmem: 5609744 kB' 'KReclaimable: 226292 kB' 'Slab: 780404 kB' 'SReclaimable: 226292 kB' 'SUnreclaim: 554112 kB' 'KernelStack: 22096 kB' 'PageTables: 8528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 7463296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216112 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2594164 kB' 'DirectMap2M: 12820480 kB' 'DirectMap1G: 53477376 kB' 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.026 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.027 11:52:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.027 11:52:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.027 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.028 
11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 45081628 kB' 'MemAvailable: 48591744 kB' 'Buffers: 2704 kB' 'Cached: 9501148 kB' 'SwapCached: 0 kB' 'Active: 6503612 kB' 'Inactive: 3505224 kB' 'Active(anon): 6114728 kB' 'Inactive(anon): 0 kB' 'Active(file): 388884 kB' 'Inactive(file): 3505224 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507892 kB' 'Mapped: 
210628 kB' 'Shmem: 5609744 kB' 'KReclaimable: 226292 kB' 'Slab: 780380 kB' 'SReclaimable: 226292 kB' 'SUnreclaim: 554088 kB' 'KernelStack: 22048 kB' 'PageTables: 8340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 7460492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216048 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2594164 kB' 'DirectMap2M: 12820480 kB' 'DirectMap1G: 53477376 kB' 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.028 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.028 11:52:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: ~46 identical skip iterations of the common.sh@31-32 loop (IFS=': '; read -r var val _; [[ $var == HugePages_Surp ]] fails; continue) over the keys Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd — then the requested key matches:]
00:03:57.030 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:57.030 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:57.030 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:57.030 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:57.030 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:57.030 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:57.030 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:57.030 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:57.030 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:57.030 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:57.030 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:57.030 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:57.030 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:57.030 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:57.030 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:57.030 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:57.030 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 45083176 kB' 'MemAvailable: 48593292 kB' 'Buffers: 2704 kB' 'Cached: 9501168 kB' 'SwapCached: 0 kB' 'Active: 6502704 kB' 'Inactive: 3505224 kB' 'Active(anon): 6113820 kB' 'Inactive(anon): 0 kB' 'Active(file): 388884 kB' 'Inactive(file): 3505224 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507340 kB' 'Mapped: 210552 kB' 'Shmem: 5609764 kB' 'KReclaimable: 226292 kB' 'Slab: 780380 kB' 'SReclaimable: 226292 kB' 'SUnreclaim: 554088 kB' 'KernelStack: 22048 kB' 'PageTables: 8324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 7460516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216048 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2594164 kB' 'DirectMap2M: 12820480 kB' 'DirectMap1G: 53477376 kB'
[xtrace condensed: the same common.sh@31-32 skip loop now runs against HugePages_Rsvd, skipping MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, reaching:]
00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.032 11:52:46
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.032 11:52:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.032 
11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:57.032 nr_hugepages=1024 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:57.032 resv_hugepages=0 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:57.032 surplus_hugepages=0 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:57.032 anon_hugepages=0 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:57.032 11:52:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.032 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 45086764 kB' 'MemAvailable: 48596880 kB' 'Buffers: 2704 kB' 'Cached: 9501208 kB' 'SwapCached: 0 kB' 'Active: 6503624 kB' 'Inactive: 3505224 kB' 'Active(anon): 6114740 kB' 'Inactive(anon): 0 kB' 'Active(file): 388884 kB' 'Inactive(file): 3505224 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 508264 kB' 'Mapped: 210552 kB' 'Shmem: 5609804 kB' 'KReclaimable: 226292 kB' 'Slab: 780380 kB' 'SReclaimable: 226292 kB' 'SUnreclaim: 554088 kB' 'KernelStack: 22032 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 7476096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216048 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 
kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2594164 kB' 'DirectMap2M: 12820480 kB' 'DirectMap1G: 53477376 kB' 00:03:57.032 [... identical "[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue / IFS=': ' / read -r var val _" trace repeats for each /proc/meminfo field preceding the target (MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted), none of which matches, until HugePages_Total ...] 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- #
no_nodes=2 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 27032528 kB' 'MemUsed: 5606612 kB' 'SwapCached: 0 kB' 'Active: 2007096 kB' 'Inactive: 164640 kB' 'Active(anon): 1746820 kB' 'Inactive(anon): 0 kB' 'Active(file): 260276 kB' 'Inactive(file): 164640 kB' 'Unevictable: 
3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1798680 kB' 'Mapped: 102988 kB' 'AnonPages: 376204 kB' 'Shmem: 1373764 kB' 'KernelStack: 12392 kB' 'PageTables: 5580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 93768 kB' 'Slab: 319988 kB' 'SReclaimable: 93768 kB' 'SUnreclaim: 226220 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.034 11:52:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.034 11:52:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.034 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.034 
11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.035 
11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.035 11:52:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@33 -- # echo 0 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:57.035 node0=1024 expecting 1024 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:57.035 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.036 11:52:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:00.320 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:00.320 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:00.320 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:00.320 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:00.320 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:00.320 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:00.320 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:00.320 0000:00:04.0 (8086 2021): Already using the vfio-pci 
driver 00:04:00.320 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:00.320 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:00.320 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:00.321 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:00.321 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:00.321 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:00.321 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:00.321 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:00.321 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:00.321 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:00.584 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:00.584 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:00.584 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:00.584 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:00.584 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:00.584 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:00.584 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:00.584 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:00.584 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:00.584 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:00.584 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:00.584 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 
00:04:00.584 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.584 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.584 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.584 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.584 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.584 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.584 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.584 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.584 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 45112068 kB' 'MemAvailable: 48622184 kB' 'Buffers: 2704 kB' 'Cached: 9501284 kB' 'SwapCached: 0 kB' 'Active: 6503420 kB' 'Inactive: 3505224 kB' 'Active(anon): 6114536 kB' 'Inactive(anon): 0 kB' 'Active(file): 388884 kB' 'Inactive(file): 3505224 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507928 kB' 'Mapped: 210588 kB' 'Shmem: 5609880 kB' 'KReclaimable: 226292 kB' 'Slab: 780312 kB' 'SReclaimable: 226292 kB' 'SUnreclaim: 554020 kB' 'KernelStack: 22048 kB' 'PageTables: 8280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 7462492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216096 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2594164 kB' 'DirectMap2M: 12820480 kB' 'DirectMap1G: 53477376 kB' 00:04:00.584 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.584 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.584 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.584 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.584 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.584 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.584 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.584 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.584 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.584 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.584 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.585 11:52:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.585 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.585 11:52:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.586 11:52:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 45113912 kB' 'MemAvailable: 48624028 kB' 'Buffers: 2704 kB' 'Cached: 9501288 kB' 'SwapCached: 0 kB' 'Active: 6503544 kB' 'Inactive: 3505224 kB' 'Active(anon): 6114660 kB' 'Inactive(anon): 0 kB' 'Active(file): 388884 kB' 'Inactive(file): 3505224 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 508104 kB' 'Mapped: 210568 kB' 'Shmem: 5609884 kB' 'KReclaimable: 226292 kB' 'Slab: 780328 kB' 'SReclaimable: 226292 kB' 'SUnreclaim: 554036 kB' 'KernelStack: 21968 kB' 'PageTables: 8064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 7462740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216112 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2594164 kB' 'DirectMap2M: 12820480 kB' 'DirectMap1G: 53477376 kB' 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.586 
11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.586 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.587 11:52:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.587 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.588 11:52:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 
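The long run of `IFS=': '` / `read -r var val _` / `continue` lines above is the xtrace expansion of the `get_meminfo` helper in `setup/common.sh`: it prints the captured `/proc/meminfo` contents, then walks each `key: value` pair, skipping every key until it matches the requested one (here `HugePages_Surp`) and echoing that value. A minimal standalone sketch of the same pattern follows; the simplified body and the optional file argument are assumptions for illustration, not the exact SPDK implementation:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo lookup pattern seen in the trace above:
# scan a meminfo-style file for one key and print its numeric value.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # Skip every line until the key matches the requested one
        # (this is the long run of "continue" steps in the trace).
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

get_meminfo HugePages_Surp
```

With `IFS=': '`, a line such as `MemTotal: 60295220 kB` splits into `var=MemTotal`, `val=60295220`, and the trailing `kB` lands in `_`, which is why the caller receives a bare number.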
00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 45115604 kB' 'MemAvailable: 48625720 kB' 'Buffers: 2704 kB' 'Cached: 9501308 kB' 'SwapCached: 0 kB' 'Active: 6503668 kB' 'Inactive: 3505224 kB' 'Active(anon): 6114784 kB' 'Inactive(anon): 0 kB' 'Active(file): 388884 kB' 'Inactive(file): 3505224 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507576 kB' 'Mapped: 210568 kB' 'Shmem: 5609904 kB' 'KReclaimable: 226292 kB' 'Slab: 780328 kB' 
'SReclaimable: 226292 kB' 'SUnreclaim: 554036 kB' 'KernelStack: 22000 kB' 'PageTables: 8252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 7462668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216048 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2594164 kB' 'DirectMap2M: 12820480 kB' 'DirectMap1G: 53477376 kB' 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
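The right-hand sides like \H\u\g\e\P\a\g\e\s\_\R\s\v\d in these [[ ... == ... ]] entries are xtrace's rendering of a quoted string: every character is backslash-escaped so the comparison is literal rather than a glob match. The distinction in isolation (variable names here are illustrative, not from the script):

```shell
#!/usr/bin/env bash
field=HugePages_Rsvd
pat='HugePages_*'

# Unquoted RHS: pattern match, the * acts as a glob
[[ $field == $pat ]] && echo "glob matches"

# Quoted RHS (what the escaped form in the log means): literal comparison
[[ $field == "$pat" ]] || echo "literal does not match"
```

This is why the trace escapes the target field name: the meminfo field being looked up must match exactly, never as a pattern.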
00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
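The mem=("${mem[@]#Node +([0-9]) }") expansion logged at common.sh@29 strips the "Node N " prefix that per-node meminfo files (/sys/devices/system/node/nodeN/meminfo) put in front of every field; when reading /proc/meminfo, as here, it is a no-op. A sketch of just that strip, with made-up sample values:

```shell
#!/usr/bin/env bash
shopt -s extglob   # +([0-9]) is an extended glob: one or more digits

# Lines as they would appear in a per-node meminfo file (hypothetical values)
mem=('Node 0 HugePages_Total: 512' 'Node 0 HugePages_Free: 512')

# Remove the shortest prefix matching 'Node <digits> ' from every element
mem=("${mem[@]#Node +([0-9]) }")

printf '%s\n' "${mem[@]}"   # prints the fields without the 'Node 0 ' prefix
```

Using one expansion over "${mem[@]}" applies the strip to the whole array at once, so the same parsing loop works for both the global and per-node files.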
00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.588 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.589 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.590 11:52:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:00.590 nr_hugepages=1024
11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:00.590 resv_hugepages=0
11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:00.590 surplus_hugepages=0
11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:00.590 anon_hugepages=0
11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28
-- # mapfile -t mem 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 45116348 kB' 'MemAvailable: 48626464 kB' 'Buffers: 2704 kB' 'Cached: 9501320 kB' 'SwapCached: 0 kB' 'Active: 6504524 kB' 'Inactive: 3505224 kB' 'Active(anon): 6115640 kB' 'Inactive(anon): 0 kB' 'Active(file): 388884 kB' 'Inactive(file): 3505224 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 508952 kB' 'Mapped: 210560 kB' 'Shmem: 5609916 kB' 'KReclaimable: 226292 kB' 'Slab: 780328 kB' 'SReclaimable: 226292 kB' 'SUnreclaim: 554036 kB' 'KernelStack: 22192 kB' 'PageTables: 8672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 7464664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216160 kB' 'VmallocChunk: 0 kB' 'Percpu: 78848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2594164 kB' 'DirectMap2M: 12820480 kB' 'DirectMap1G: 53477376 kB' 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.590 11:52:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.590 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.591 11:52:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
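Stepping back from the field-by-field scan: the checks logged at hugepages.sh@107 and @109 assert that the configured hugepage count is consistent with what /proc/meminfo reports, i.e. the total must equal nr_hugepages plus the surplus and reserved counts just collected. As plain arithmetic, with the values from this run:

```shell
#!/usr/bin/env bash
# Values taken from the trace: 1024 pages configured, none surplus/reserved
nr_hugepages=1024
surp=0
resv=0
hugepages_total=1024   # HugePages_Total as reported by /proc/meminfo

(( hugepages_total == nr_hugepages + surp + resv )) \
	&& echo "hugepages accounting consistent"
```

If either arithmetic test failed, the no_shrink_alloc case would know the kernel shrank or over-allocated the pool behind the script's back.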
00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.591 11:52:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.591 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.592 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.592 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.592 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.592 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.592 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.592 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.592 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.592 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.592 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.592 11:52:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.592 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.592 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.592 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.592 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.592 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.592 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.592 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.592 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.592 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.592 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.592 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.592 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.592 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.592 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.592 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.592 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.592 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.592 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:00.592 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.592 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.592 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:00.592 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:00.592 11:52:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@18 -- # local node=0 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 27042664 kB' 'MemUsed: 5596476 kB' 'SwapCached: 0 kB' 'Active: 2009660 kB' 'Inactive: 164640 kB' 'Active(anon): 1749384 kB' 'Inactive(anon): 0 kB' 'Active(file): 260276 kB' 'Inactive(file): 164640 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1798704 kB' 'Mapped: 102996 kB' 'AnonPages: 378844 kB' 'Shmem: 1373788 kB' 'KernelStack: 12568 kB' 'PageTables: 5616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 93768 kB' 'Slab: 320020 kB' 'SReclaimable: 93768 kB' 'SUnreclaim: 226252 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.592 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.593 11:52:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 
'node0=1024 expecting 1024' 00:04:00.593 node0=1024 expecting 1024 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:00.593 00:04:00.593 real 0m6.831s 00:04:00.593 user 0m2.489s 00:04:00.593 sys 0m4.412s 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:00.593 11:52:50 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:00.593 ************************************ 00:04:00.593 END TEST no_shrink_alloc 00:04:00.594 ************************************ 00:04:00.594 11:52:50 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:00.594 11:52:50 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:00.594 11:52:50 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:00.594 11:52:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:00.594 11:52:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:00.594 11:52:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:00.594 11:52:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:00.594 11:52:50 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:00.594 11:52:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:00.594 11:52:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:00.594 11:52:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:00.594 11:52:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:00.594 11:52:50 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:00.594 11:52:50 setup.sh.hugepages -- 
setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:00.594 00:04:00.594 real 0m27.105s 00:04:00.594 user 0m9.494s 00:04:00.594 sys 0m16.430s 00:04:00.594 11:52:50 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:00.594 11:52:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:00.594 ************************************ 00:04:00.594 END TEST hugepages 00:04:00.594 ************************************ 00:04:00.852 11:52:50 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:00.852 11:52:50 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:00.852 11:52:50 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:00.852 11:52:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:00.852 ************************************ 00:04:00.852 START TEST driver 00:04:00.852 ************************************ 00:04:00.852 11:52:50 setup.sh.driver -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:00.852 * Looking for test storage... 
00:04:00.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:00.852 11:52:50 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:00.852 11:52:50 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:00.852 11:52:50 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:06.130 11:52:54 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:06.130 11:52:54 setup.sh.driver -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:06.130 11:52:54 setup.sh.driver -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:06.130 11:52:54 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:06.130 ************************************ 00:04:06.130 START TEST guess_driver 00:04:06.130 ************************************ 00:04:06.130 11:52:54 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # guess_driver 00:04:06.130 11:52:54 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:06.130 11:52:54 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:06.130 11:52:54 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:06.130 11:52:54 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:06.130 11:52:54 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups 00:04:06.130 11:52:54 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:06.130 11:52:54 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:06.130 11:52:54 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:06.130 11:52:54 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:06.130 11:52:54 setup.sh.driver.guess_driver -- setup/driver.sh@29 
-- # (( 176 > 0 )) 00:04:06.130 11:52:54 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:06.130 11:52:54 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:06.131 11:52:54 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:06.131 11:52:54 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:06.131 11:52:55 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:06.131 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:06.131 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:06.131 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:06.131 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:06.131 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:06.131 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:06.131 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:06.131 11:52:55 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:06.131 11:52:55 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:06.131 11:52:55 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:06.131 11:52:55 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:06.131 11:52:55 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:06.131 Looking for driver=vfio-pci 00:04:06.131 11:52:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.131 11:52:55 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup 
output config 00:04:06.131 11:52:55 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.131 11:52:55 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:09.421 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.421 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.421 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.421 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.421 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.421 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.421 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.421 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.421 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.421 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.421 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.421 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.421 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.421 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.421 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.421 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.421 11:52:58 setup.sh.driver.guess_driver -- 
setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.421 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.421 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.421 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.421 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.421 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.421 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.421 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.421 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.421 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.421 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.421 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.421 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.421 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.421 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.421 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.421 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.421 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.422 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.422 11:52:58 setup.sh.driver.guess_driver -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.422 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.422 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.422 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.422 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.422 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.422 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.422 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.422 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.422 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.422 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.422 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.422 11:52:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:10.801 11:52:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:10.801 11:52:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:10.801 11:52:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:10.801 11:53:00 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:10.801 11:53:00 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:10.801 11:53:00 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:10.801 11:53:00 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:16.076 00:04:16.076 real 0m9.764s 00:04:16.076 user 0m2.471s 00:04:16.076 sys 0m4.995s 00:04:16.076 11:53:04 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:16.076 11:53:04 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:16.076 ************************************ 00:04:16.076 END TEST guess_driver 00:04:16.076 ************************************ 00:04:16.076 00:04:16.076 real 0m14.621s 00:04:16.076 user 0m3.788s 00:04:16.076 sys 0m7.745s 00:04:16.076 11:53:04 setup.sh.driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:16.076 11:53:04 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:16.076 ************************************ 00:04:16.076 END TEST driver 00:04:16.076 ************************************ 00:04:16.076 11:53:04 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:16.076 11:53:04 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:16.076 11:53:04 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:16.076 11:53:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:16.076 ************************************ 00:04:16.076 START TEST devices 00:04:16.076 ************************************ 00:04:16.076 11:53:04 setup.sh.devices -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:16.076 * Looking for test storage... 
00:04:16.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:16.076 11:53:04 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:16.076 11:53:04 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:16.076 11:53:04 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:16.076 11:53:04 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:19.385 11:53:08 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:19.385 11:53:08 setup.sh.devices -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:04:19.385 11:53:08 setup.sh.devices -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:04:19.385 11:53:08 setup.sh.devices -- common/autotest_common.sh@1669 -- # local nvme bdf 00:04:19.385 11:53:08 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:19.385 11:53:08 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:04:19.385 11:53:08 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:04:19.385 11:53:08 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:19.385 11:53:08 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:19.385 11:53:08 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:19.385 11:53:08 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:19.385 11:53:08 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:19.385 11:53:08 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:19.385 11:53:08 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:19.385 11:53:08 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:19.385 11:53:08 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
00:04:19.385 11:53:08 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:19.385 11:53:08 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:04:19.385 11:53:08 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:19.385 11:53:08 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:19.385 11:53:08 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:19.385 11:53:08 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:19.385 No valid GPT data, bailing 00:04:19.385 11:53:08 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:19.385 11:53:08 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:19.385 11:53:08 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:19.385 11:53:08 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:19.385 11:53:08 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:19.385 11:53:08 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:19.385 11:53:08 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:04:19.385 11:53:08 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:04:19.385 11:53:08 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:19.385 11:53:08 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:04:19.385 11:53:08 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:19.385 11:53:08 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:19.385 11:53:08 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:19.385 11:53:08 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:19.385 11:53:08 setup.sh.devices -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:04:19.385 11:53:08 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:19.385 ************************************ 00:04:19.385 START TEST nvme_mount 00:04:19.385 ************************************ 00:04:19.385 11:53:08 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # nvme_mount 00:04:19.385 11:53:08 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:19.385 11:53:08 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:19.385 11:53:08 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.385 11:53:08 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:19.386 11:53:08 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:19.386 11:53:08 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:19.386 11:53:08 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:19.386 11:53:08 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:19.386 11:53:08 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:19.386 11:53:08 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:19.386 11:53:08 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:19.386 11:53:08 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:19.386 11:53:08 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:19.386 11:53:08 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:19.386 11:53:08 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:19.386 11:53:08 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:19.386 11:53:08 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:19.386 11:53:08 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:19.386 11:53:08 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:20.324 Creating new GPT entries in memory. 00:04:20.324 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:20.324 other utilities. 00:04:20.324 11:53:09 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:20.324 11:53:09 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:20.324 11:53:09 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:20.324 11:53:09 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:20.324 11:53:09 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:21.703 Creating new GPT entries in memory. 00:04:21.703 The operation has completed successfully. 
00:04:21.703 11:53:10 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:21.703 11:53:10 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:21.703 11:53:10 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2007654 00:04:21.703 11:53:10 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.703 11:53:10 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:21.703 11:53:10 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.703 11:53:10 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:21.703 11:53:10 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:21.703 11:53:10 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.703 11:53:10 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:21.703 11:53:10 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:21.703 11:53:10 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:21.703 11:53:10 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.703 11:53:10 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
00:04:21.703 11:53:10 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:21.703 11:53:10 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:21.703 11:53:10 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:21.703 11:53:10 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:21.703 11:53:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.703 11:53:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:21.703 11:53:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:21.703 11:53:10 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.703 11:53:10 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:24.994 11:53:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.994 11:53:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.994 11:53:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.994 11:53:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.994 11:53:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.994 11:53:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.994 11:53:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.994 11:53:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.994 11:53:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.994 11:53:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.994 11:53:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.994 11:53:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.994 11:53:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.994 11:53:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.994 11:53:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.994 11:53:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.994 11:53:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.994 11:53:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.994 11:53:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.994 11:53:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.994 11:53:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.994 11:53:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.994 11:53:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.994 11:53:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.994 11:53:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.994 11:53:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.994 11:53:13 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.994 11:53:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.994 11:53:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.994 11:53:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.994 11:53:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.994 11:53:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.994 11:53:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.994 11:53:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:24.994 11:53:14 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:24.994 11:53:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.994 11:53:14 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:24.994 11:53:14 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:24.994 11:53:14 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.994 11:53:14 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:24.994 11:53:14 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:24.994 11:53:14 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 
00:04:24.994 11:53:14 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.994 11:53:14 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.994 11:53:14 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:24.994 11:53:14 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:24.994 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:24.994 11:53:14 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:24.994 11:53:14 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:24.994 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:24.994 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:24.994 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:24.994 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:24.994 11:53:14 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:24.994 11:53:14 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:24.994 11:53:14 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.994 11:53:14 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:24.994 11:53:14 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:25.254 11:53:14 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount 
/dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.254 11:53:14 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:25.254 11:53:14 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:25.254 11:53:14 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:25.254 11:53:14 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.254 11:53:14 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:25.254 11:53:14 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:25.254 11:53:14 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:25.254 11:53:14 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:25.254 11:53:14 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:25.254 11:53:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.254 11:53:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:25.254 11:53:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:25.254 11:53:14 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.254 11:53:14 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:28.548 11:53:17 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:28.548 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.549 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:04:28.549 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:28.549 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:28.549 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:28.549 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:28.549 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:28.549 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:28.549 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:28.549 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.549 11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:28.549 
11:53:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:28.549 11:53:17 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.549 11:53:17 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.841 11:53:20 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:31.841 11:53:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.841 11:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:31.841 11:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:31.841 11:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:31.841 11:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:31.841 11:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.841 11:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:31.841 11:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:31.841 11:53:21 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:31.841 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:31.841 00:04:31.841 real 0m12.285s 00:04:31.841 user 0m3.477s 00:04:31.841 sys 0m6.663s 00:04:31.841 11:53:21 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:31.841 11:53:21 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:31.841 ************************************ 00:04:31.841 END TEST nvme_mount 00:04:31.841 ************************************ 00:04:31.841 11:53:21 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:31.841 11:53:21 setup.sh.devices -- common/autotest_common.sh@1100 
-- # '[' 2 -le 1 ']' 00:04:31.841 11:53:21 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:31.841 11:53:21 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:31.841 ************************************ 00:04:31.841 START TEST dm_mount 00:04:31.841 ************************************ 00:04:31.842 11:53:21 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # dm_mount 00:04:31.842 11:53:21 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:31.842 11:53:21 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:31.842 11:53:21 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:31.842 11:53:21 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:31.842 11:53:21 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:31.842 11:53:21 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:31.842 11:53:21 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:31.842 11:53:21 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:31.842 11:53:21 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:31.842 11:53:21 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:31.842 11:53:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:31.842 11:53:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:31.842 11:53:21 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:31.842 11:53:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:31.842 11:53:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:31.842 11:53:21 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:31.842 11:53:21 setup.sh.devices.dm_mount -- 
setup/common.sh@46 -- # (( part++ )) 00:04:31.842 11:53:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:31.842 11:53:21 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:31.842 11:53:21 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:31.842 11:53:21 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:32.780 Creating new GPT entries in memory. 00:04:32.780 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:32.780 other utilities. 00:04:32.780 11:53:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:32.780 11:53:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:32.780 11:53:22 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:32.780 11:53:22 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:32.780 11:53:22 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:33.717 Creating new GPT entries in memory. 00:04:33.717 The operation has completed successfully. 00:04:33.717 11:53:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:33.717 11:53:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:33.717 11:53:23 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:33.717 11:53:23 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:33.717 11:53:23 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:35.096 The operation has completed successfully. 
00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2012072 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # 
local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.096 11:53:24 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- 
setup/devices.sh@51 -- # local test_file= 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.385 11:53:27 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:40.919 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.920 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.920 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.920 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.920 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.920 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.920 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.920 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.920 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.920 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.920 11:53:30 
setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.920 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.920 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.920 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.920 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.920 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.920 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.920 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.920 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.920 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.920 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.920 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.920 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.920 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.920 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.920 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.920 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.920 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.920 11:53:30 
setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.920 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.920 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.920 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.178 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:41.178 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:41.178 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:41.178 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.178 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:41.178 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:41.178 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:41.178 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:41.178 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:41.178 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:41.178 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:41.437 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:41.437 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:41.437 /dev/nvme0n1p1: 2 bytes 
were erased at offset 0x00000438 (ext4): 53 ef 00:04:41.437 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:41.437 11:53:30 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:41.437 00:04:41.437 real 0m9.561s 00:04:41.437 user 0m2.273s 00:04:41.437 sys 0m4.362s 00:04:41.437 11:53:30 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:41.437 11:53:30 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:41.437 ************************************ 00:04:41.437 END TEST dm_mount 00:04:41.437 ************************************ 00:04:41.437 11:53:30 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:41.437 11:53:30 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:41.437 11:53:30 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:41.437 11:53:30 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:41.437 11:53:30 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:41.437 11:53:30 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:41.437 11:53:30 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:41.698 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:41.698 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:41.698 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:41.698 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:41.698 11:53:31 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:41.698 11:53:31 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:41.698 11:53:31 setup.sh.devices -- setup/devices.sh@36 -- # 
[[ -L /dev/mapper/nvme_dm_test ]] 00:04:41.698 11:53:31 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:41.698 11:53:31 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:41.698 11:53:31 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:41.698 11:53:31 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:41.698 00:04:41.698 real 0m26.192s 00:04:41.698 user 0m7.276s 00:04:41.698 sys 0m13.770s 00:04:41.698 11:53:31 setup.sh.devices -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:41.698 11:53:31 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:41.698 ************************************ 00:04:41.698 END TEST devices 00:04:41.698 ************************************ 00:04:41.698 00:04:41.698 real 1m32.218s 00:04:41.698 user 0m28.255s 00:04:41.698 sys 0m52.722s 00:04:41.698 11:53:31 setup.sh -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:41.698 11:53:31 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:41.698 ************************************ 00:04:41.698 END TEST setup.sh 00:04:41.698 ************************************ 00:04:41.698 11:53:31 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:45.030 Hugepages 00:04:45.030 node hugesize free / total 00:04:45.030 node0 1048576kB 0 / 0 00:04:45.030 node0 2048kB 2048 / 2048 00:04:45.030 node1 1048576kB 0 / 0 00:04:45.030 node1 2048kB 0 / 0 00:04:45.030 00:04:45.030 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:45.030 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:45.030 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:45.030 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:45.030 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:45.030 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:45.030 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:45.030 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 
00:04:45.030 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:45.030 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:45.030 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:45.030 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:45.030 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:45.030 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:45.030 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:45.030 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:45.030 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:45.030 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:45.030 11:53:34 -- spdk/autotest.sh@130 -- # uname -s 00:04:45.030 11:53:34 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:45.030 11:53:34 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:45.030 11:53:34 -- common/autotest_common.sh@1530 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:48.320 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:48.320 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:48.320 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:48.320 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:48.320 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:48.320 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:48.320 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:48.320 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:48.320 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:48.320 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:48.320 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:48.320 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:48.320 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:48.320 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:48.320 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:48.320 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:49.699 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:49.699 11:53:39 -- common/autotest_common.sh@1531 -- # sleep 1 00:04:50.656 11:53:40 -- 
common/autotest_common.sh@1532 -- # bdfs=() 00:04:50.656 11:53:40 -- common/autotest_common.sh@1532 -- # local bdfs 00:04:50.656 11:53:40 -- common/autotest_common.sh@1533 -- # bdfs=($(get_nvme_bdfs)) 00:04:50.656 11:53:40 -- common/autotest_common.sh@1533 -- # get_nvme_bdfs 00:04:50.656 11:53:40 -- common/autotest_common.sh@1512 -- # bdfs=() 00:04:50.656 11:53:40 -- common/autotest_common.sh@1512 -- # local bdfs 00:04:50.656 11:53:40 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:50.656 11:53:40 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:50.656 11:53:40 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:04:50.915 11:53:40 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:04:50.915 11:53:40 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:d8:00.0 00:04:50.915 11:53:40 -- common/autotest_common.sh@1535 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:53.448 Waiting for block devices as requested 00:04:53.708 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:53.708 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:53.708 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:53.967 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:53.967 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:53.967 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:54.225 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:54.225 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:54.225 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:54.225 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:54.484 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:54.484 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:54.484 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:54.743 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:54.743 0000:80:04.1 (8086 2021): vfio-pci 
-> ioatdma 00:04:54.743 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:55.003 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:55.003 11:53:44 -- common/autotest_common.sh@1537 -- # for bdf in "${bdfs[@]}" 00:04:55.003 11:53:44 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:55.003 11:53:44 -- common/autotest_common.sh@1501 -- # readlink -f /sys/class/nvme/nvme0 00:04:55.003 11:53:44 -- common/autotest_common.sh@1501 -- # grep 0000:d8:00.0/nvme/nvme 00:04:55.003 11:53:44 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:55.003 11:53:44 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:55.003 11:53:44 -- common/autotest_common.sh@1506 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:55.003 11:53:44 -- common/autotest_common.sh@1506 -- # printf '%s\n' nvme0 00:04:55.003 11:53:44 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme0 00:04:55.003 11:53:44 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme0 ]] 00:04:55.003 11:53:44 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme0 00:04:55.003 11:53:44 -- common/autotest_common.sh@1544 -- # grep oacs 00:04:55.003 11:53:44 -- common/autotest_common.sh@1544 -- # cut -d: -f2 00:04:55.003 11:53:44 -- common/autotest_common.sh@1544 -- # oacs=' 0xe' 00:04:55.003 11:53:44 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8 00:04:55.003 11:53:44 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]] 00:04:55.003 11:53:44 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme0 00:04:55.003 11:53:44 -- common/autotest_common.sh@1553 -- # grep unvmcap 00:04:55.003 11:53:44 -- common/autotest_common.sh@1553 -- # cut -d: -f2 00:04:55.263 11:53:44 -- common/autotest_common.sh@1553 -- # unvmcap=' 0' 00:04:55.263 11:53:44 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]] 00:04:55.263 11:53:44 -- 
common/autotest_common.sh@1556 -- # continue 00:04:55.263 11:53:44 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:55.263 11:53:44 -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:55.263 11:53:44 -- common/autotest_common.sh@10 -- # set +x 00:04:55.263 11:53:44 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:55.263 11:53:44 -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:55.263 11:53:44 -- common/autotest_common.sh@10 -- # set +x 00:04:55.263 11:53:44 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:58.552 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:58.552 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:58.552 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:58.552 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:58.552 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:58.552 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:58.552 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:58.552 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:58.552 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:58.552 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:58.552 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:58.552 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:58.552 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:58.552 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:58.552 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:58.552 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:00.459 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:00.459 11:53:49 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:00.459 11:53:49 -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:00.459 11:53:49 -- common/autotest_common.sh@10 -- # set +x 00:05:00.459 11:53:49 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:00.459 11:53:49 -- common/autotest_common.sh@1590 -- # mapfile -t bdfs 00:05:00.459 11:53:49 -- 
common/autotest_common.sh@1590 -- # get_nvme_bdfs_by_id 0x0a54 00:05:00.459 11:53:49 -- common/autotest_common.sh@1576 -- # bdfs=() 00:05:00.459 11:53:49 -- common/autotest_common.sh@1576 -- # local bdfs 00:05:00.459 11:53:49 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs 00:05:00.459 11:53:49 -- common/autotest_common.sh@1512 -- # bdfs=() 00:05:00.459 11:53:49 -- common/autotest_common.sh@1512 -- # local bdfs 00:05:00.459 11:53:49 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:00.459 11:53:49 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:00.459 11:53:49 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:05:00.459 11:53:49 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:05:00.459 11:53:49 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:d8:00.0 00:05:00.459 11:53:49 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:05:00.459 11:53:49 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:00.459 11:53:49 -- common/autotest_common.sh@1579 -- # device=0x0a54 00:05:00.459 11:53:49 -- common/autotest_common.sh@1580 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:00.459 11:53:49 -- common/autotest_common.sh@1581 -- # bdfs+=($bdf) 00:05:00.459 11:53:49 -- common/autotest_common.sh@1585 -- # printf '%s\n' 0000:d8:00.0 00:05:00.459 11:53:49 -- common/autotest_common.sh@1591 -- # [[ -z 0000:d8:00.0 ]] 00:05:00.459 11:53:49 -- common/autotest_common.sh@1596 -- # spdk_tgt_pid=2021583 00:05:00.459 11:53:49 -- common/autotest_common.sh@1597 -- # waitforlisten 2021583 00:05:00.459 11:53:49 -- common/autotest_common.sh@1595 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.459 11:53:49 -- common/autotest_common.sh@830 -- # '[' -z 2021583 ']' 00:05:00.459 11:53:49 -- common/autotest_common.sh@834 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:00.459 11:53:49 -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:00.459 11:53:49 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.459 11:53:49 -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:00.459 11:53:49 -- common/autotest_common.sh@10 -- # set +x 00:05:00.459 [2024-06-10 11:53:49.817834] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:05:00.459 [2024-06-10 11:53:49.817889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2021583 ] 00:05:00.459 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.459 [2024-06-10 11:53:49.888696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.459 [2024-06-10 11:53:49.963087] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.397 11:53:50 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:01.397 11:53:50 -- common/autotest_common.sh@863 -- # return 0 00:05:01.397 11:53:50 -- common/autotest_common.sh@1599 -- # bdf_id=0 00:05:01.397 11:53:50 -- common/autotest_common.sh@1600 -- # for bdf in "${bdfs[@]}" 00:05:01.397 11:53:50 -- common/autotest_common.sh@1601 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:04.685 nvme0n1 00:05:04.685 11:53:53 -- common/autotest_common.sh@1603 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:04.685 [2024-06-10 11:53:53.757551] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:04.685 request: 00:05:04.685 
{ 00:05:04.685 "nvme_ctrlr_name": "nvme0", 00:05:04.685 "password": "test", 00:05:04.685 "method": "bdev_nvme_opal_revert", 00:05:04.685 "req_id": 1 00:05:04.685 } 00:05:04.685 Got JSON-RPC error response 00:05:04.685 response: 00:05:04.685 { 00:05:04.685 "code": -32602, 00:05:04.685 "message": "Invalid parameters" 00:05:04.685 } 00:05:04.685 11:53:53 -- common/autotest_common.sh@1603 -- # true 00:05:04.685 11:53:53 -- common/autotest_common.sh@1604 -- # (( ++bdf_id )) 00:05:04.685 11:53:53 -- common/autotest_common.sh@1607 -- # killprocess 2021583 00:05:04.685 11:53:53 -- common/autotest_common.sh@949 -- # '[' -z 2021583 ']' 00:05:04.685 11:53:53 -- common/autotest_common.sh@953 -- # kill -0 2021583 00:05:04.685 11:53:53 -- common/autotest_common.sh@954 -- # uname 00:05:04.685 11:53:53 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:04.685 11:53:53 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2021583 00:05:04.685 11:53:53 -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:04.685 11:53:53 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:04.685 11:53:53 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2021583' 00:05:04.685 killing process with pid 2021583 00:05:04.685 11:53:53 -- common/autotest_common.sh@968 -- # kill 2021583 00:05:04.685 11:53:53 -- common/autotest_common.sh@973 -- # wait 2021583 00:05:06.587 11:53:55 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:06.587 11:53:55 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:06.587 11:53:55 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:06.587 11:53:55 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:06.587 11:53:55 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:06.587 11:53:55 -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:06.587 11:53:55 -- common/autotest_common.sh@10 -- # set +x 00:05:06.587 11:53:56 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:06.587 11:53:56 -- spdk/autotest.sh@168 -- # 
run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:06.587 11:53:56 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:06.587 11:53:56 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:06.587 11:53:56 -- common/autotest_common.sh@10 -- # set +x 00:05:06.587 ************************************ 00:05:06.587 START TEST env 00:05:06.587 ************************************ 00:05:06.587 11:53:56 env -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:06.847 * Looking for test storage... 00:05:06.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:06.847 11:53:56 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:06.847 11:53:56 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:06.847 11:53:56 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:06.847 11:53:56 env -- common/autotest_common.sh@10 -- # set +x 00:05:06.847 ************************************ 00:05:06.847 START TEST env_memory 00:05:06.847 ************************************ 00:05:06.847 11:53:56 env.env_memory -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:06.847 00:05:06.847 00:05:06.847 CUnit - A unit testing framework for C - Version 2.1-3 00:05:06.847 http://cunit.sourceforge.net/ 00:05:06.847 00:05:06.847 00:05:06.847 Suite: memory 00:05:06.847 Test: alloc and free memory map ...[2024-06-10 11:53:56.220985] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:06.847 passed 00:05:06.847 Test: mem map translation ...[2024-06-10 11:53:56.240035] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid 
spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:06.847 [2024-06-10 11:53:56.240051] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:06.847 [2024-06-10 11:53:56.240088] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:06.847 [2024-06-10 11:53:56.240097] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:06.847 passed 00:05:06.847 Test: mem map registration ...[2024-06-10 11:53:56.276302] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:06.847 [2024-06-10 11:53:56.276316] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:06.847 passed 00:05:06.847 Test: mem map adjacent registrations ...passed 00:05:06.847 00:05:06.847 Run Summary: Type Total Ran Passed Failed Inactive 00:05:06.847 suites 1 1 n/a 0 0 00:05:06.847 tests 4 4 4 0 0 00:05:06.847 asserts 152 152 152 0 n/a 00:05:06.847 00:05:06.847 Elapsed time = 0.134 seconds 00:05:06.847 00:05:06.847 real 0m0.148s 00:05:06.847 user 0m0.137s 00:05:06.847 sys 0m0.011s 00:05:06.847 11:53:56 env.env_memory -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:06.847 11:53:56 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:06.847 ************************************ 00:05:06.847 END TEST env_memory 00:05:06.847 ************************************ 00:05:07.106 11:53:56 env -- env/env.sh@11 -- # run_test env_vtophys 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:07.106 11:53:56 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:07.106 11:53:56 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:07.106 11:53:56 env -- common/autotest_common.sh@10 -- # set +x 00:05:07.106 ************************************ 00:05:07.106 START TEST env_vtophys 00:05:07.106 ************************************ 00:05:07.106 11:53:56 env.env_vtophys -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:07.106 EAL: lib.eal log level changed from notice to debug 00:05:07.106 EAL: Detected lcore 0 as core 0 on socket 0 00:05:07.106 EAL: Detected lcore 1 as core 1 on socket 0 00:05:07.106 EAL: Detected lcore 2 as core 2 on socket 0 00:05:07.106 EAL: Detected lcore 3 as core 3 on socket 0 00:05:07.106 EAL: Detected lcore 4 as core 4 on socket 0 00:05:07.106 EAL: Detected lcore 5 as core 5 on socket 0 00:05:07.106 EAL: Detected lcore 6 as core 6 on socket 0 00:05:07.106 EAL: Detected lcore 7 as core 8 on socket 0 00:05:07.106 EAL: Detected lcore 8 as core 9 on socket 0 00:05:07.106 EAL: Detected lcore 9 as core 10 on socket 0 00:05:07.106 EAL: Detected lcore 10 as core 11 on socket 0 00:05:07.106 EAL: Detected lcore 11 as core 12 on socket 0 00:05:07.106 EAL: Detected lcore 12 as core 13 on socket 0 00:05:07.106 EAL: Detected lcore 13 as core 14 on socket 0 00:05:07.106 EAL: Detected lcore 14 as core 16 on socket 0 00:05:07.106 EAL: Detected lcore 15 as core 17 on socket 0 00:05:07.106 EAL: Detected lcore 16 as core 18 on socket 0 00:05:07.106 EAL: Detected lcore 17 as core 19 on socket 0 00:05:07.106 EAL: Detected lcore 18 as core 20 on socket 0 00:05:07.106 EAL: Detected lcore 19 as core 21 on socket 0 00:05:07.106 EAL: Detected lcore 20 as core 22 on socket 0 00:05:07.106 EAL: Detected lcore 21 as core 24 on socket 0 00:05:07.106 EAL: Detected lcore 22 as core 25 on socket 0 
00:05:07.106 EAL: Detected lcore 23 as core 26 on socket 0 00:05:07.106 EAL: Detected lcore 24 as core 27 on socket 0 00:05:07.106 EAL: Detected lcore 25 as core 28 on socket 0 00:05:07.106 EAL: Detected lcore 26 as core 29 on socket 0 00:05:07.106 EAL: Detected lcore 27 as core 30 on socket 0 00:05:07.106 EAL: Detected lcore 28 as core 0 on socket 1 00:05:07.106 EAL: Detected lcore 29 as core 1 on socket 1 00:05:07.106 EAL: Detected lcore 30 as core 2 on socket 1 00:05:07.106 EAL: Detected lcore 31 as core 3 on socket 1 00:05:07.106 EAL: Detected lcore 32 as core 4 on socket 1 00:05:07.106 EAL: Detected lcore 33 as core 5 on socket 1 00:05:07.106 EAL: Detected lcore 34 as core 6 on socket 1 00:05:07.106 EAL: Detected lcore 35 as core 8 on socket 1 00:05:07.106 EAL: Detected lcore 36 as core 9 on socket 1 00:05:07.106 EAL: Detected lcore 37 as core 10 on socket 1 00:05:07.106 EAL: Detected lcore 38 as core 11 on socket 1 00:05:07.106 EAL: Detected lcore 39 as core 12 on socket 1 00:05:07.106 EAL: Detected lcore 40 as core 13 on socket 1 00:05:07.106 EAL: Detected lcore 41 as core 14 on socket 1 00:05:07.106 EAL: Detected lcore 42 as core 16 on socket 1 00:05:07.106 EAL: Detected lcore 43 as core 17 on socket 1 00:05:07.106 EAL: Detected lcore 44 as core 18 on socket 1 00:05:07.106 EAL: Detected lcore 45 as core 19 on socket 1 00:05:07.106 EAL: Detected lcore 46 as core 20 on socket 1 00:05:07.106 EAL: Detected lcore 47 as core 21 on socket 1 00:05:07.106 EAL: Detected lcore 48 as core 22 on socket 1 00:05:07.106 EAL: Detected lcore 49 as core 24 on socket 1 00:05:07.106 EAL: Detected lcore 50 as core 25 on socket 1 00:05:07.106 EAL: Detected lcore 51 as core 26 on socket 1 00:05:07.106 EAL: Detected lcore 52 as core 27 on socket 1 00:05:07.106 EAL: Detected lcore 53 as core 28 on socket 1 00:05:07.106 EAL: Detected lcore 54 as core 29 on socket 1 00:05:07.106 EAL: Detected lcore 55 as core 30 on socket 1 00:05:07.106 EAL: Detected lcore 56 as core 0 on socket 0 
00:05:07.106 EAL: Detected lcore 57 as core 1 on socket 0 00:05:07.106 EAL: Detected lcore 58 as core 2 on socket 0 00:05:07.106 EAL: Detected lcore 59 as core 3 on socket 0 00:05:07.106 EAL: Detected lcore 60 as core 4 on socket 0 00:05:07.106 EAL: Detected lcore 61 as core 5 on socket 0 00:05:07.106 EAL: Detected lcore 62 as core 6 on socket 0 00:05:07.106 EAL: Detected lcore 63 as core 8 on socket 0 00:05:07.106 EAL: Detected lcore 64 as core 9 on socket 0 00:05:07.106 EAL: Detected lcore 65 as core 10 on socket 0 00:05:07.106 EAL: Detected lcore 66 as core 11 on socket 0 00:05:07.106 EAL: Detected lcore 67 as core 12 on socket 0 00:05:07.106 EAL: Detected lcore 68 as core 13 on socket 0 00:05:07.106 EAL: Detected lcore 69 as core 14 on socket 0 00:05:07.106 EAL: Detected lcore 70 as core 16 on socket 0 00:05:07.106 EAL: Detected lcore 71 as core 17 on socket 0 00:05:07.106 EAL: Detected lcore 72 as core 18 on socket 0 00:05:07.106 EAL: Detected lcore 73 as core 19 on socket 0 00:05:07.106 EAL: Detected lcore 74 as core 20 on socket 0 00:05:07.106 EAL: Detected lcore 75 as core 21 on socket 0 00:05:07.106 EAL: Detected lcore 76 as core 22 on socket 0 00:05:07.106 EAL: Detected lcore 77 as core 24 on socket 0 00:05:07.106 EAL: Detected lcore 78 as core 25 on socket 0 00:05:07.106 EAL: Detected lcore 79 as core 26 on socket 0 00:05:07.106 EAL: Detected lcore 80 as core 27 on socket 0 00:05:07.106 EAL: Detected lcore 81 as core 28 on socket 0 00:05:07.106 EAL: Detected lcore 82 as core 29 on socket 0 00:05:07.106 EAL: Detected lcore 83 as core 30 on socket 0 00:05:07.106 EAL: Detected lcore 84 as core 0 on socket 1 00:05:07.106 EAL: Detected lcore 85 as core 1 on socket 1 00:05:07.106 EAL: Detected lcore 86 as core 2 on socket 1 00:05:07.106 EAL: Detected lcore 87 as core 3 on socket 1 00:05:07.106 EAL: Detected lcore 88 as core 4 on socket 1 00:05:07.106 EAL: Detected lcore 89 as core 5 on socket 1 00:05:07.106 EAL: Detected lcore 90 as core 6 on socket 1 
00:05:07.106 EAL: Detected lcore 91 as core 8 on socket 1 00:05:07.106 EAL: Detected lcore 92 as core 9 on socket 1 00:05:07.106 EAL: Detected lcore 93 as core 10 on socket 1 00:05:07.106 EAL: Detected lcore 94 as core 11 on socket 1 00:05:07.107 EAL: Detected lcore 95 as core 12 on socket 1 00:05:07.107 EAL: Detected lcore 96 as core 13 on socket 1 00:05:07.107 EAL: Detected lcore 97 as core 14 on socket 1 00:05:07.107 EAL: Detected lcore 98 as core 16 on socket 1 00:05:07.107 EAL: Detected lcore 99 as core 17 on socket 1 00:05:07.107 EAL: Detected lcore 100 as core 18 on socket 1 00:05:07.107 EAL: Detected lcore 101 as core 19 on socket 1 00:05:07.107 EAL: Detected lcore 102 as core 20 on socket 1 00:05:07.107 EAL: Detected lcore 103 as core 21 on socket 1 00:05:07.107 EAL: Detected lcore 104 as core 22 on socket 1 00:05:07.107 EAL: Detected lcore 105 as core 24 on socket 1 00:05:07.107 EAL: Detected lcore 106 as core 25 on socket 1 00:05:07.107 EAL: Detected lcore 107 as core 26 on socket 1 00:05:07.107 EAL: Detected lcore 108 as core 27 on socket 1 00:05:07.107 EAL: Detected lcore 109 as core 28 on socket 1 00:05:07.107 EAL: Detected lcore 110 as core 29 on socket 1 00:05:07.107 EAL: Detected lcore 111 as core 30 on socket 1 00:05:07.107 EAL: Maximum logical cores by configuration: 128 00:05:07.107 EAL: Detected CPU lcores: 112 00:05:07.107 EAL: Detected NUMA nodes: 2 00:05:07.107 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:07.107 EAL: Detected shared linkage of DPDK 00:05:07.107 EAL: No shared files mode enabled, IPC will be disabled 00:05:07.107 EAL: Bus pci wants IOVA as 'DC' 00:05:07.107 EAL: Buses did not request a specific IOVA mode. 00:05:07.107 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:07.107 EAL: Selected IOVA mode 'VA' 00:05:07.107 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.107 EAL: Probing VFIO support... 
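The EAL probe above enumerates 112 logical cores spread over two NUMA sockets, one `Detected lcore N as core M on socket S` line per core. As a rough sketch (not part of the test suite), the per-socket totals can be tallied from those lines with awk; the three sample input lines below are copied from the log, so the counts shown are for the excerpt only, not the full 112-core machine:

```shell
# Count detected lcores per NUMA socket from EAL probe lines.
# The socket id is the last whitespace-separated field ($NF).
tally_lcores() {
  awk '/Detected lcore/ { count[$NF]++ }
       END { for (s in count) print "socket " s ": " count[s] }' | sort
}

tally_lcores <<'EOF'
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 28 as core 0 on socket 1
EAL: Detected lcore 56 as core 0 on socket 0
EOF
```

Run against the full log, the same filter would report 56 lcores on each socket.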
00:05:07.107 EAL: IOMMU type 1 (Type 1) is supported 00:05:07.107 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:07.107 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:07.107 EAL: VFIO support initialized 00:05:07.107 EAL: Ask a virtual area of 0x2e000 bytes 00:05:07.107 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:07.107 EAL: Setting up physically contiguous memory... 00:05:07.107 EAL: Setting maximum number of open files to 524288 00:05:07.107 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:07.107 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:07.107 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:07.107 EAL: Ask a virtual area of 0x61000 bytes 00:05:07.107 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:07.107 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:07.107 EAL: Ask a virtual area of 0x400000000 bytes 00:05:07.107 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:07.107 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:07.107 EAL: Ask a virtual area of 0x61000 bytes 00:05:07.107 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:07.107 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:07.107 EAL: Ask a virtual area of 0x400000000 bytes 00:05:07.107 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:07.107 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:07.107 EAL: Ask a virtual area of 0x61000 bytes 00:05:07.107 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:07.107 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:07.107 EAL: Ask a virtual area of 0x400000000 bytes 00:05:07.107 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:07.107 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:07.107 EAL: Ask a virtual area of 0x61000 bytes 00:05:07.107 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:07.107 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:07.107 EAL: Ask a virtual area of 0x400000000 bytes 00:05:07.107 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:07.107 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:07.107 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:07.107 EAL: Ask a virtual area of 0x61000 bytes 00:05:07.107 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:07.107 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:07.107 EAL: Ask a virtual area of 0x400000000 bytes 00:05:07.107 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:07.107 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:07.107 EAL: Ask a virtual area of 0x61000 bytes 00:05:07.107 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:07.107 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:07.107 EAL: Ask a virtual area of 0x400000000 bytes 00:05:07.107 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:07.107 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:07.107 EAL: Ask a virtual area of 0x61000 bytes 00:05:07.107 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:07.107 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:07.107 EAL: Ask a virtual area of 0x400000000 bytes 00:05:07.107 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:07.107 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:07.107 EAL: Ask a virtual area of 0x61000 bytes 00:05:07.107 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:07.107 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:07.107 EAL: Ask a virtual area of 0x400000000 bytes 00:05:07.107 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:05:07.107 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:07.107 EAL: Hugepages will be freed exactly as allocated. 00:05:07.107 EAL: No shared files mode enabled, IPC is disabled 00:05:07.107 EAL: No shared files mode enabled, IPC is disabled 00:05:07.107 EAL: TSC frequency is ~2500000 KHz 00:05:07.107 EAL: Main lcore 0 is ready (tid=7f69bf0cfa00;cpuset=[0]) 00:05:07.107 EAL: Trying to obtain current memory policy. 00:05:07.107 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.107 EAL: Restoring previous memory policy: 0 00:05:07.107 EAL: request: mp_malloc_sync 00:05:07.107 EAL: No shared files mode enabled, IPC is disabled 00:05:07.107 EAL: Heap on socket 0 was expanded by 2MB 00:05:07.107 EAL: No shared files mode enabled, IPC is disabled 00:05:07.107 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:07.107 EAL: Mem event callback 'spdk:(nil)' registered 00:05:07.107 00:05:07.107 00:05:07.107 CUnit - A unit testing framework for C - Version 2.1-3 00:05:07.107 http://cunit.sourceforge.net/ 00:05:07.107 00:05:07.107 00:05:07.107 Suite: components_suite 00:05:07.107 Test: vtophys_malloc_test ...passed 00:05:07.107 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:07.107 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.107 EAL: Restoring previous memory policy: 4 00:05:07.107 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.107 EAL: request: mp_malloc_sync 00:05:07.107 EAL: No shared files mode enabled, IPC is disabled 00:05:07.107 EAL: Heap on socket 0 was expanded by 4MB 00:05:07.107 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.107 EAL: request: mp_malloc_sync 00:05:07.107 EAL: No shared files mode enabled, IPC is disabled 00:05:07.107 EAL: Heap on socket 0 was shrunk by 4MB 00:05:07.107 EAL: Trying to obtain current memory policy. 
00:05:07.107 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.107 EAL: Restoring previous memory policy: 4 00:05:07.107 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.107 EAL: request: mp_malloc_sync 00:05:07.107 EAL: No shared files mode enabled, IPC is disabled 00:05:07.107 EAL: Heap on socket 0 was expanded by 6MB 00:05:07.107 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.107 EAL: request: mp_malloc_sync 00:05:07.107 EAL: No shared files mode enabled, IPC is disabled 00:05:07.107 EAL: Heap on socket 0 was shrunk by 6MB 00:05:07.107 EAL: Trying to obtain current memory policy. 00:05:07.107 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.107 EAL: Restoring previous memory policy: 4 00:05:07.107 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.107 EAL: request: mp_malloc_sync 00:05:07.107 EAL: No shared files mode enabled, IPC is disabled 00:05:07.107 EAL: Heap on socket 0 was expanded by 10MB 00:05:07.107 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.107 EAL: request: mp_malloc_sync 00:05:07.107 EAL: No shared files mode enabled, IPC is disabled 00:05:07.107 EAL: Heap on socket 0 was shrunk by 10MB 00:05:07.107 EAL: Trying to obtain current memory policy. 00:05:07.107 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.107 EAL: Restoring previous memory policy: 4 00:05:07.107 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.107 EAL: request: mp_malloc_sync 00:05:07.107 EAL: No shared files mode enabled, IPC is disabled 00:05:07.107 EAL: Heap on socket 0 was expanded by 18MB 00:05:07.107 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.107 EAL: request: mp_malloc_sync 00:05:07.107 EAL: No shared files mode enabled, IPC is disabled 00:05:07.107 EAL: Heap on socket 0 was shrunk by 18MB 00:05:07.107 EAL: Trying to obtain current memory policy. 
00:05:07.107 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.107 EAL: Restoring previous memory policy: 4 00:05:07.107 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.107 EAL: request: mp_malloc_sync 00:05:07.107 EAL: No shared files mode enabled, IPC is disabled 00:05:07.107 EAL: Heap on socket 0 was expanded by 34MB 00:05:07.107 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.107 EAL: request: mp_malloc_sync 00:05:07.107 EAL: No shared files mode enabled, IPC is disabled 00:05:07.107 EAL: Heap on socket 0 was shrunk by 34MB 00:05:07.107 EAL: Trying to obtain current memory policy. 00:05:07.107 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.107 EAL: Restoring previous memory policy: 4 00:05:07.107 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.107 EAL: request: mp_malloc_sync 00:05:07.107 EAL: No shared files mode enabled, IPC is disabled 00:05:07.107 EAL: Heap on socket 0 was expanded by 66MB 00:05:07.107 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.107 EAL: request: mp_malloc_sync 00:05:07.107 EAL: No shared files mode enabled, IPC is disabled 00:05:07.107 EAL: Heap on socket 0 was shrunk by 66MB 00:05:07.107 EAL: Trying to obtain current memory policy. 00:05:07.108 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.108 EAL: Restoring previous memory policy: 4 00:05:07.108 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.108 EAL: request: mp_malloc_sync 00:05:07.108 EAL: No shared files mode enabled, IPC is disabled 00:05:07.108 EAL: Heap on socket 0 was expanded by 130MB 00:05:07.108 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.108 EAL: request: mp_malloc_sync 00:05:07.108 EAL: No shared files mode enabled, IPC is disabled 00:05:07.108 EAL: Heap on socket 0 was shrunk by 130MB 00:05:07.108 EAL: Trying to obtain current memory policy. 
00:05:07.108 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.365 EAL: Restoring previous memory policy: 4 00:05:07.365 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.365 EAL: request: mp_malloc_sync 00:05:07.365 EAL: No shared files mode enabled, IPC is disabled 00:05:07.365 EAL: Heap on socket 0 was expanded by 258MB 00:05:07.365 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.365 EAL: request: mp_malloc_sync 00:05:07.365 EAL: No shared files mode enabled, IPC is disabled 00:05:07.365 EAL: Heap on socket 0 was shrunk by 258MB 00:05:07.365 EAL: Trying to obtain current memory policy. 00:05:07.365 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.365 EAL: Restoring previous memory policy: 4 00:05:07.365 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.365 EAL: request: mp_malloc_sync 00:05:07.365 EAL: No shared files mode enabled, IPC is disabled 00:05:07.365 EAL: Heap on socket 0 was expanded by 514MB 00:05:07.623 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.623 EAL: request: mp_malloc_sync 00:05:07.623 EAL: No shared files mode enabled, IPC is disabled 00:05:07.623 EAL: Heap on socket 0 was shrunk by 514MB 00:05:07.623 EAL: Trying to obtain current memory policy. 
00:05:07.623 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.881 EAL: Restoring previous memory policy: 4 00:05:07.881 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.881 EAL: request: mp_malloc_sync 00:05:07.881 EAL: No shared files mode enabled, IPC is disabled 00:05:07.881 EAL: Heap on socket 0 was expanded by 1026MB 00:05:07.881 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.140 EAL: request: mp_malloc_sync 00:05:08.140 EAL: No shared files mode enabled, IPC is disabled 00:05:08.140 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:08.140 passed 00:05:08.140 00:05:08.140 Run Summary: Type Total Ran Passed Failed Inactive 00:05:08.140 suites 1 1 n/a 0 0 00:05:08.140 tests 2 2 2 0 0 00:05:08.140 asserts 497 497 497 0 n/a 00:05:08.140 00:05:08.140 Elapsed time = 0.963 seconds 00:05:08.140 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.140 EAL: request: mp_malloc_sync 00:05:08.140 EAL: No shared files mode enabled, IPC is disabled 00:05:08.140 EAL: Heap on socket 0 was shrunk by 2MB 00:05:08.140 EAL: No shared files mode enabled, IPC is disabled 00:05:08.140 EAL: No shared files mode enabled, IPC is disabled 00:05:08.140 EAL: No shared files mode enabled, IPC is disabled 00:05:08.140 00:05:08.140 real 0m1.095s 00:05:08.140 user 0m0.633s 00:05:08.140 sys 0m0.431s 00:05:08.140 11:53:57 env.env_vtophys -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:08.140 11:53:57 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:08.140 ************************************ 00:05:08.140 END TEST env_vtophys 00:05:08.140 ************************************ 00:05:08.140 11:53:57 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:08.140 11:53:57 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:08.140 11:53:57 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:08.140 11:53:57 env -- common/autotest_common.sh@10 -- # set +x 00:05:08.140 
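An aside on the env_vtophys output above: the EAL heap expansions logged during vtophys_spdk_malloc_test (4MB, 6MB, 10MB, 18MB, 34MB, 66MB, 130MB, 258MB, 514MB, 1026MB) follow a 2^n + 2 MB pattern — each step doubles the allocation and carries 2 MB of hugepage-granularity overhead. This is an observation about the log itself, not a claim from SPDK documentation; a quick shell check of the sequence:

```shell
# Reproduce the expansion sizes seen in the env_vtophys log above.
# Pattern (observed from the log, not from SPDK docs): (1 << n) + 2 MB, n = 1..10.
for n in $(seq 1 10); do
  echo "$(( (1 << n) + 2 ))MB"
done
# → 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB
```

Each "expanded by" line is paired with a matching "shrunk by" line once the test frees the buffer, which is why the final heap state returns to the initial 2MB allocation before the suite summary.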
************************************ 00:05:08.140 START TEST env_pci 00:05:08.140 ************************************ 00:05:08.140 11:53:57 env.env_pci -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:08.140 00:05:08.140 00:05:08.140 CUnit - A unit testing framework for C - Version 2.1-3 00:05:08.140 http://cunit.sourceforge.net/ 00:05:08.140 00:05:08.140 00:05:08.140 Suite: pci 00:05:08.140 Test: pci_hook ...[2024-06-10 11:53:57.593653] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2023056 has claimed it 00:05:08.140 EAL: Cannot find device (10000:00:01.0) 00:05:08.140 EAL: Failed to attach device on primary process 00:05:08.140 passed 00:05:08.140 00:05:08.140 Run Summary: Type Total Ran Passed Failed Inactive 00:05:08.140 suites 1 1 n/a 0 0 00:05:08.140 tests 1 1 1 0 0 00:05:08.140 asserts 25 25 25 0 n/a 00:05:08.140 00:05:08.140 Elapsed time = 0.033 seconds 00:05:08.140 00:05:08.140 real 0m0.054s 00:05:08.140 user 0m0.012s 00:05:08.140 sys 0m0.042s 00:05:08.140 11:53:57 env.env_pci -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:08.140 11:53:57 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:08.140 ************************************ 00:05:08.140 END TEST env_pci 00:05:08.140 ************************************ 00:05:08.398 11:53:57 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:08.398 11:53:57 env -- env/env.sh@15 -- # uname 00:05:08.398 11:53:57 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:08.398 11:53:57 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:08.398 11:53:57 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:08.398 11:53:57 env -- 
common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:05:08.398 11:53:57 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:08.398 11:53:57 env -- common/autotest_common.sh@10 -- # set +x 00:05:08.398 ************************************ 00:05:08.398 START TEST env_dpdk_post_init 00:05:08.398 ************************************ 00:05:08.398 11:53:57 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:08.398 EAL: Detected CPU lcores: 112 00:05:08.398 EAL: Detected NUMA nodes: 2 00:05:08.398 EAL: Detected shared linkage of DPDK 00:05:08.399 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:08.399 EAL: Selected IOVA mode 'VA' 00:05:08.399 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.399 EAL: VFIO support initialized 00:05:08.399 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:08.399 EAL: Using IOMMU type 1 (Type 1) 00:05:08.399 EAL: Ignore mapping IO port bar(1) 00:05:08.399 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:08.399 EAL: Ignore mapping IO port bar(1) 00:05:08.399 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:08.399 EAL: Ignore mapping IO port bar(1) 00:05:08.399 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:08.399 EAL: Ignore mapping IO port bar(1) 00:05:08.399 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:08.399 EAL: Ignore mapping IO port bar(1) 00:05:08.399 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:08.658 EAL: Ignore mapping IO port bar(1) 00:05:08.658 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:08.658 EAL: Ignore mapping IO port bar(1) 00:05:08.658 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 
00:05:08.658 EAL: Ignore mapping IO port bar(1) 00:05:08.658 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:08.658 EAL: Ignore mapping IO port bar(1) 00:05:08.658 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:08.658 EAL: Ignore mapping IO port bar(1) 00:05:08.658 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:08.658 EAL: Ignore mapping IO port bar(1) 00:05:08.658 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:08.658 EAL: Ignore mapping IO port bar(1) 00:05:08.658 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:08.658 EAL: Ignore mapping IO port bar(1) 00:05:08.658 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:08.658 EAL: Ignore mapping IO port bar(1) 00:05:08.658 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:08.658 EAL: Ignore mapping IO port bar(1) 00:05:08.658 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:08.658 EAL: Ignore mapping IO port bar(1) 00:05:08.658 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:09.594 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:12.907 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:12.907 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:05:13.166 Starting DPDK initialization... 00:05:13.166 Starting SPDK post initialization... 00:05:13.166 SPDK NVMe probe 00:05:13.166 Attaching to 0000:d8:00.0 00:05:13.166 Attached to 0000:d8:00.0 00:05:13.166 Cleaning up... 
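The "No free 2048 kB hugepages reported on node 1" EAL message recurs throughout this run (the host reports 2 NUMA nodes). When triaging it, the per-node hugepage state can be read directly from standard Linux sysfs. The helper below is general Linux tooling sketched for illustration — it is not part of the SPDK test scripts:

```shell
# Report 2048 kB hugepage counts per NUMA node from standard Linux sysfs.
# Illustrative helper only; not from the SPDK autotest scripts.
show_hugepages() {
  local base=${1:-/sys/devices/system/node}
  local node hp
  for node in "$base"/node*; do
    hp="$node/hugepages/hugepages-2048kB"
    if [ -r "$hp/free_hugepages" ]; then
      echo "$(basename "$node"): $(cat "$hp/nr_hugepages") total, $(cat "$hp/free_hugepages") free"
    fi
  done
}
```

On a host like this one, `show_hugepages` would emit one line per NUMA node; a node showing `0 free` matches the EAL warning above.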
00:05:13.166 00:05:13.166 real 0m4.960s 00:05:13.166 user 0m3.636s 00:05:13.166 sys 0m0.373s 00:05:13.166 11:54:02 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:13.166 11:54:02 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:13.166 ************************************ 00:05:13.166 END TEST env_dpdk_post_init 00:05:13.166 ************************************ 00:05:13.427 11:54:02 env -- env/env.sh@26 -- # uname 00:05:13.427 11:54:02 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:13.427 11:54:02 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:13.427 11:54:02 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:13.427 11:54:02 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:13.427 11:54:02 env -- common/autotest_common.sh@10 -- # set +x 00:05:13.427 ************************************ 00:05:13.427 START TEST env_mem_callbacks 00:05:13.427 ************************************ 00:05:13.427 11:54:02 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:13.427 EAL: Detected CPU lcores: 112 00:05:13.427 EAL: Detected NUMA nodes: 2 00:05:13.427 EAL: Detected shared linkage of DPDK 00:05:13.427 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:13.427 EAL: Selected IOVA mode 'VA' 00:05:13.427 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.427 EAL: VFIO support initialized 00:05:13.427 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:13.427 00:05:13.427 00:05:13.427 CUnit - A unit testing framework for C - Version 2.1-3 00:05:13.427 http://cunit.sourceforge.net/ 00:05:13.427 00:05:13.427 00:05:13.427 Suite: memory 00:05:13.427 Test: test ... 
00:05:13.427 register 0x200000200000 2097152 00:05:13.427 malloc 3145728 00:05:13.427 register 0x200000400000 4194304 00:05:13.427 buf 0x200000500000 len 3145728 PASSED 00:05:13.427 malloc 64 00:05:13.427 buf 0x2000004fff40 len 64 PASSED 00:05:13.427 malloc 4194304 00:05:13.427 register 0x200000800000 6291456 00:05:13.427 buf 0x200000a00000 len 4194304 PASSED 00:05:13.427 free 0x200000500000 3145728 00:05:13.427 free 0x2000004fff40 64 00:05:13.427 unregister 0x200000400000 4194304 PASSED 00:05:13.427 free 0x200000a00000 4194304 00:05:13.427 unregister 0x200000800000 6291456 PASSED 00:05:13.427 malloc 8388608 00:05:13.427 register 0x200000400000 10485760 00:05:13.427 buf 0x200000600000 len 8388608 PASSED 00:05:13.427 free 0x200000600000 8388608 00:05:13.427 unregister 0x200000400000 10485760 PASSED 00:05:13.427 passed 00:05:13.427 00:05:13.427 Run Summary: Type Total Ran Passed Failed Inactive 00:05:13.427 suites 1 1 n/a 0 0 00:05:13.427 tests 1 1 1 0 0 00:05:13.427 asserts 15 15 15 0 n/a 00:05:13.427 00:05:13.427 Elapsed time = 0.006 seconds 00:05:13.427 00:05:13.427 real 0m0.069s 00:05:13.427 user 0m0.027s 00:05:13.427 sys 0m0.042s 00:05:13.427 11:54:02 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:13.427 11:54:02 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:13.427 ************************************ 00:05:13.427 END TEST env_mem_callbacks 00:05:13.427 ************************************ 00:05:13.427 00:05:13.427 real 0m6.839s 00:05:13.427 user 0m4.651s 00:05:13.427 sys 0m1.245s 00:05:13.427 11:54:02 env -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:13.427 11:54:02 env -- common/autotest_common.sh@10 -- # set +x 00:05:13.427 ************************************ 00:05:13.427 END TEST env 00:05:13.427 ************************************ 00:05:13.427 11:54:02 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:13.427 11:54:02 
-- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:13.427 11:54:02 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:13.427 11:54:02 -- common/autotest_common.sh@10 -- # set +x 00:05:13.687 ************************************ 00:05:13.687 START TEST rpc 00:05:13.687 ************************************ 00:05:13.687 11:54:02 rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:13.687 * Looking for test storage... 00:05:13.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:13.687 11:54:03 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2024190 00:05:13.687 11:54:03 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.687 11:54:03 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:13.687 11:54:03 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2024190 00:05:13.687 11:54:03 rpc -- common/autotest_common.sh@830 -- # '[' -z 2024190 ']' 00:05:13.687 11:54:03 rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.687 11:54:03 rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:13.687 11:54:03 rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.687 11:54:03 rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:13.687 11:54:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.687 [2024-06-10 11:54:03.117505] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:05:13.687 [2024-06-10 11:54:03.117555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2024190 ] 00:05:13.687 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.687 [2024-06-10 11:54:03.187200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.947 [2024-06-10 11:54:03.257535] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:13.947 [2024-06-10 11:54:03.257581] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2024190' to capture a snapshot of events at runtime. 00:05:13.947 [2024-06-10 11:54:03.257590] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:13.947 [2024-06-10 11:54:03.257598] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:13.947 [2024-06-10 11:54:03.257621] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2024190 for offline analysis/debug. 
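The rpc suite above blocks on "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." before issuing any RPC. A minimal sketch of such a wait loop follows; this is a simplified stand-in for illustration, not the actual `waitforlisten` helper from autotest_common.sh (which also verifies the target answers RPCs, not merely that the socket exists):

```shell
# Simplified sketch: poll until a UNIX-domain socket path appears, or time out.
# NOT the real waitforlisten from autotest_common.sh; illustration only.
wait_for_sock() {
  local sock=$1 retries=${2:-50}
  while [ "$retries" -gt 0 ]; do
    [ -S "$sock" ] && return 0   # socket node exists: target is listening
    retries=$((retries - 1))
    sleep 0.1
  done
  return 1                       # timed out waiting for the socket
}
```

Usage in the spirit of the log above would be along the lines of `wait_for_sock /var/tmp/spdk.sock 100 || exit 1` before the first RPC call.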
00:05:13.947 [2024-06-10 11:54:03.257643] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.515 11:54:03 rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:14.515 11:54:03 rpc -- common/autotest_common.sh@863 -- # return 0 00:05:14.515 11:54:03 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:14.515 11:54:03 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:14.515 11:54:03 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:14.515 11:54:03 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:14.515 11:54:03 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:14.515 11:54:03 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:14.515 11:54:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.515 ************************************ 00:05:14.515 START TEST rpc_integrity 00:05:14.515 ************************************ 00:05:14.515 11:54:03 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:05:14.515 11:54:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:14.515 11:54:03 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:14.515 11:54:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.515 11:54:03 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:14.515 11:54:03 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:05:14.515 11:54:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:14.515 11:54:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:14.515 11:54:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:14.515 11:54:04 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:14.515 11:54:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.515 11:54:04 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:14.515 11:54:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:14.515 11:54:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:14.515 11:54:04 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:14.515 11:54:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.774 11:54:04 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:14.774 11:54:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:14.774 { 00:05:14.774 "name": "Malloc0", 00:05:14.774 "aliases": [ 00:05:14.774 "d5550afc-87c5-496c-a03d-64388017026c" 00:05:14.774 ], 00:05:14.774 "product_name": "Malloc disk", 00:05:14.774 "block_size": 512, 00:05:14.774 "num_blocks": 16384, 00:05:14.774 "uuid": "d5550afc-87c5-496c-a03d-64388017026c", 00:05:14.774 "assigned_rate_limits": { 00:05:14.774 "rw_ios_per_sec": 0, 00:05:14.774 "rw_mbytes_per_sec": 0, 00:05:14.774 "r_mbytes_per_sec": 0, 00:05:14.774 "w_mbytes_per_sec": 0 00:05:14.774 }, 00:05:14.774 "claimed": false, 00:05:14.774 "zoned": false, 00:05:14.774 "supported_io_types": { 00:05:14.774 "read": true, 00:05:14.774 "write": true, 00:05:14.774 "unmap": true, 00:05:14.774 "write_zeroes": true, 00:05:14.774 "flush": true, 00:05:14.774 "reset": true, 00:05:14.774 "compare": false, 00:05:14.774 "compare_and_write": false, 00:05:14.774 "abort": true, 00:05:14.774 "nvme_admin": false, 00:05:14.774 "nvme_io": false 00:05:14.774 
}, 00:05:14.774 "memory_domains": [ 00:05:14.774 { 00:05:14.774 "dma_device_id": "system", 00:05:14.774 "dma_device_type": 1 00:05:14.774 }, 00:05:14.774 { 00:05:14.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.774 "dma_device_type": 2 00:05:14.774 } 00:05:14.774 ], 00:05:14.774 "driver_specific": {} 00:05:14.774 } 00:05:14.774 ]' 00:05:14.774 11:54:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:14.774 11:54:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:14.774 11:54:04 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:14.774 11:54:04 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:14.774 11:54:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.775 [2024-06-10 11:54:04.087059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:14.775 [2024-06-10 11:54:04.087092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:14.775 [2024-06-10 11:54:04.087104] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f3c300 00:05:14.775 [2024-06-10 11:54:04.087113] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:14.775 [2024-06-10 11:54:04.088138] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:14.775 [2024-06-10 11:54:04.088160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:14.775 Passthru0 00:05:14.775 11:54:04 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:14.775 11:54:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:14.775 11:54:04 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:14.775 11:54:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.775 11:54:04 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:14.775 11:54:04 
rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:14.775 { 00:05:14.775 "name": "Malloc0", 00:05:14.775 "aliases": [ 00:05:14.775 "d5550afc-87c5-496c-a03d-64388017026c" 00:05:14.775 ], 00:05:14.775 "product_name": "Malloc disk", 00:05:14.775 "block_size": 512, 00:05:14.775 "num_blocks": 16384, 00:05:14.775 "uuid": "d5550afc-87c5-496c-a03d-64388017026c", 00:05:14.775 "assigned_rate_limits": { 00:05:14.775 "rw_ios_per_sec": 0, 00:05:14.775 "rw_mbytes_per_sec": 0, 00:05:14.775 "r_mbytes_per_sec": 0, 00:05:14.775 "w_mbytes_per_sec": 0 00:05:14.775 }, 00:05:14.775 "claimed": true, 00:05:14.775 "claim_type": "exclusive_write", 00:05:14.775 "zoned": false, 00:05:14.775 "supported_io_types": { 00:05:14.775 "read": true, 00:05:14.775 "write": true, 00:05:14.775 "unmap": true, 00:05:14.775 "write_zeroes": true, 00:05:14.775 "flush": true, 00:05:14.775 "reset": true, 00:05:14.775 "compare": false, 00:05:14.775 "compare_and_write": false, 00:05:14.775 "abort": true, 00:05:14.775 "nvme_admin": false, 00:05:14.775 "nvme_io": false 00:05:14.775 }, 00:05:14.775 "memory_domains": [ 00:05:14.775 { 00:05:14.775 "dma_device_id": "system", 00:05:14.775 "dma_device_type": 1 00:05:14.775 }, 00:05:14.775 { 00:05:14.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.775 "dma_device_type": 2 00:05:14.775 } 00:05:14.775 ], 00:05:14.775 "driver_specific": {} 00:05:14.775 }, 00:05:14.775 { 00:05:14.775 "name": "Passthru0", 00:05:14.775 "aliases": [ 00:05:14.775 "617237d8-06d1-5858-85d2-0543cafa932a" 00:05:14.775 ], 00:05:14.775 "product_name": "passthru", 00:05:14.775 "block_size": 512, 00:05:14.775 "num_blocks": 16384, 00:05:14.775 "uuid": "617237d8-06d1-5858-85d2-0543cafa932a", 00:05:14.775 "assigned_rate_limits": { 00:05:14.775 "rw_ios_per_sec": 0, 00:05:14.775 "rw_mbytes_per_sec": 0, 00:05:14.775 "r_mbytes_per_sec": 0, 00:05:14.775 "w_mbytes_per_sec": 0 00:05:14.775 }, 00:05:14.775 "claimed": false, 00:05:14.775 "zoned": false, 00:05:14.775 "supported_io_types": { 00:05:14.775 
"read": true, 00:05:14.775 "write": true, 00:05:14.775 "unmap": true, 00:05:14.775 "write_zeroes": true, 00:05:14.775 "flush": true, 00:05:14.775 "reset": true, 00:05:14.775 "compare": false, 00:05:14.775 "compare_and_write": false, 00:05:14.775 "abort": true, 00:05:14.775 "nvme_admin": false, 00:05:14.775 "nvme_io": false 00:05:14.775 }, 00:05:14.775 "memory_domains": [ 00:05:14.775 { 00:05:14.775 "dma_device_id": "system", 00:05:14.775 "dma_device_type": 1 00:05:14.775 }, 00:05:14.775 { 00:05:14.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.775 "dma_device_type": 2 00:05:14.775 } 00:05:14.775 ], 00:05:14.775 "driver_specific": { 00:05:14.775 "passthru": { 00:05:14.775 "name": "Passthru0", 00:05:14.775 "base_bdev_name": "Malloc0" 00:05:14.775 } 00:05:14.775 } 00:05:14.775 } 00:05:14.775 ]' 00:05:14.775 11:54:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:14.775 11:54:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:14.775 11:54:04 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:14.775 11:54:04 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:14.775 11:54:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.775 11:54:04 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:14.775 11:54:04 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:14.775 11:54:04 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:14.775 11:54:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.775 11:54:04 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:14.775 11:54:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:14.775 11:54:04 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:14.775 11:54:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.775 11:54:04 rpc.rpc_integrity -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:14.775 11:54:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:14.775 11:54:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:14.775 11:54:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:14.775 00:05:14.775 real 0m0.256s 00:05:14.775 user 0m0.152s 00:05:14.775 sys 0m0.046s 00:05:14.775 11:54:04 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:14.775 11:54:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.775 ************************************ 00:05:14.775 END TEST rpc_integrity 00:05:14.775 ************************************ 00:05:14.775 11:54:04 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:14.775 11:54:04 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:14.775 11:54:04 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:14.775 11:54:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.035 ************************************ 00:05:15.035 START TEST rpc_plugins 00:05:15.035 ************************************ 00:05:15.035 11:54:04 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # rpc_plugins 00:05:15.035 11:54:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:15.035 11:54:04 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:15.035 11:54:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:15.035 11:54:04 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:15.035 11:54:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:15.035 11:54:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:15.035 11:54:04 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:15.035 11:54:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:15.035 11:54:04 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:05:15.035 11:54:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:15.035 { 00:05:15.035 "name": "Malloc1", 00:05:15.035 "aliases": [ 00:05:15.035 "650b7db0-d620-45d9-8387-b947964eebe1" 00:05:15.035 ], 00:05:15.035 "product_name": "Malloc disk", 00:05:15.035 "block_size": 4096, 00:05:15.035 "num_blocks": 256, 00:05:15.035 "uuid": "650b7db0-d620-45d9-8387-b947964eebe1", 00:05:15.035 "assigned_rate_limits": { 00:05:15.035 "rw_ios_per_sec": 0, 00:05:15.035 "rw_mbytes_per_sec": 0, 00:05:15.035 "r_mbytes_per_sec": 0, 00:05:15.035 "w_mbytes_per_sec": 0 00:05:15.035 }, 00:05:15.035 "claimed": false, 00:05:15.035 "zoned": false, 00:05:15.035 "supported_io_types": { 00:05:15.035 "read": true, 00:05:15.035 "write": true, 00:05:15.035 "unmap": true, 00:05:15.035 "write_zeroes": true, 00:05:15.035 "flush": true, 00:05:15.035 "reset": true, 00:05:15.035 "compare": false, 00:05:15.035 "compare_and_write": false, 00:05:15.035 "abort": true, 00:05:15.035 "nvme_admin": false, 00:05:15.035 "nvme_io": false 00:05:15.035 }, 00:05:15.035 "memory_domains": [ 00:05:15.035 { 00:05:15.035 "dma_device_id": "system", 00:05:15.035 "dma_device_type": 1 00:05:15.035 }, 00:05:15.035 { 00:05:15.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.035 "dma_device_type": 2 00:05:15.035 } 00:05:15.035 ], 00:05:15.035 "driver_specific": {} 00:05:15.035 } 00:05:15.035 ]' 00:05:15.035 11:54:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:15.035 11:54:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:15.035 11:54:04 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:15.035 11:54:04 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:15.035 11:54:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:15.035 11:54:04 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:15.035 11:54:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:15.035 11:54:04 
rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:15.035 11:54:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:15.035 11:54:04 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:15.035 11:54:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:15.035 11:54:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:15.035 11:54:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:15.035 00:05:15.035 real 0m0.136s 00:05:15.035 user 0m0.086s 00:05:15.035 sys 0m0.024s 00:05:15.035 11:54:04 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:15.035 11:54:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:15.035 ************************************ 00:05:15.035 END TEST rpc_plugins 00:05:15.035 ************************************ 00:05:15.035 11:54:04 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:15.035 11:54:04 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:15.035 11:54:04 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:15.035 11:54:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.035 ************************************ 00:05:15.035 START TEST rpc_trace_cmd_test 00:05:15.035 ************************************ 00:05:15.035 11:54:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # rpc_trace_cmd_test 00:05:15.035 11:54:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:15.035 11:54:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:15.035 11:54:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:15.035 11:54:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:15.035 11:54:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:15.035 11:54:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:15.035 "tpoint_shm_path": 
"/dev/shm/spdk_tgt_trace.pid2024190", 00:05:15.035 "tpoint_group_mask": "0x8", 00:05:15.035 "iscsi_conn": { 00:05:15.035 "mask": "0x2", 00:05:15.035 "tpoint_mask": "0x0" 00:05:15.035 }, 00:05:15.035 "scsi": { 00:05:15.035 "mask": "0x4", 00:05:15.035 "tpoint_mask": "0x0" 00:05:15.035 }, 00:05:15.035 "bdev": { 00:05:15.035 "mask": "0x8", 00:05:15.035 "tpoint_mask": "0xffffffffffffffff" 00:05:15.035 }, 00:05:15.035 "nvmf_rdma": { 00:05:15.035 "mask": "0x10", 00:05:15.035 "tpoint_mask": "0x0" 00:05:15.035 }, 00:05:15.035 "nvmf_tcp": { 00:05:15.036 "mask": "0x20", 00:05:15.036 "tpoint_mask": "0x0" 00:05:15.036 }, 00:05:15.036 "ftl": { 00:05:15.036 "mask": "0x40", 00:05:15.036 "tpoint_mask": "0x0" 00:05:15.036 }, 00:05:15.036 "blobfs": { 00:05:15.036 "mask": "0x80", 00:05:15.036 "tpoint_mask": "0x0" 00:05:15.036 }, 00:05:15.036 "dsa": { 00:05:15.036 "mask": "0x200", 00:05:15.036 "tpoint_mask": "0x0" 00:05:15.036 }, 00:05:15.036 "thread": { 00:05:15.036 "mask": "0x400", 00:05:15.036 "tpoint_mask": "0x0" 00:05:15.036 }, 00:05:15.036 "nvme_pcie": { 00:05:15.036 "mask": "0x800", 00:05:15.036 "tpoint_mask": "0x0" 00:05:15.036 }, 00:05:15.036 "iaa": { 00:05:15.036 "mask": "0x1000", 00:05:15.036 "tpoint_mask": "0x0" 00:05:15.036 }, 00:05:15.036 "nvme_tcp": { 00:05:15.036 "mask": "0x2000", 00:05:15.036 "tpoint_mask": "0x0" 00:05:15.036 }, 00:05:15.036 "bdev_nvme": { 00:05:15.036 "mask": "0x4000", 00:05:15.036 "tpoint_mask": "0x0" 00:05:15.036 }, 00:05:15.036 "sock": { 00:05:15.036 "mask": "0x8000", 00:05:15.036 "tpoint_mask": "0x0" 00:05:15.036 } 00:05:15.036 }' 00:05:15.036 11:54:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:15.295 11:54:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:15.295 11:54:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:15.295 11:54:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:15.295 11:54:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 
'has("tpoint_shm_path")' 00:05:15.295 11:54:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:15.295 11:54:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:15.295 11:54:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:15.295 11:54:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:15.295 11:54:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:15.295 00:05:15.295 real 0m0.212s 00:05:15.295 user 0m0.166s 00:05:15.295 sys 0m0.039s 00:05:15.295 11:54:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:15.295 11:54:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:15.295 ************************************ 00:05:15.295 END TEST rpc_trace_cmd_test 00:05:15.295 ************************************ 00:05:15.295 11:54:04 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:15.295 11:54:04 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:15.295 11:54:04 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:15.295 11:54:04 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:15.295 11:54:04 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:15.295 11:54:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.295 ************************************ 00:05:15.295 START TEST rpc_daemon_integrity 00:05:15.295 ************************************ 00:05:15.295 11:54:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:05:15.295 11:54:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:15.295 11:54:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:15.295 11:54:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.295 11:54:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:15.554 11:54:04 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:15.554 11:54:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:15.554 11:54:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:15.554 11:54:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:15.554 11:54:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:15.554 11:54:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.554 11:54:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:15.554 11:54:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:15.554 11:54:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:15.554 11:54:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:15.554 11:54:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.554 11:54:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:15.554 11:54:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:15.554 { 00:05:15.554 "name": "Malloc2", 00:05:15.554 "aliases": [ 00:05:15.554 "e7deda1b-80fe-43f3-b7d0-b472681cb92c" 00:05:15.554 ], 00:05:15.554 "product_name": "Malloc disk", 00:05:15.554 "block_size": 512, 00:05:15.554 "num_blocks": 16384, 00:05:15.554 "uuid": "e7deda1b-80fe-43f3-b7d0-b472681cb92c", 00:05:15.554 "assigned_rate_limits": { 00:05:15.554 "rw_ios_per_sec": 0, 00:05:15.554 "rw_mbytes_per_sec": 0, 00:05:15.554 "r_mbytes_per_sec": 0, 00:05:15.554 "w_mbytes_per_sec": 0 00:05:15.554 }, 00:05:15.554 "claimed": false, 00:05:15.554 "zoned": false, 00:05:15.554 "supported_io_types": { 00:05:15.554 "read": true, 00:05:15.554 "write": true, 00:05:15.554 "unmap": true, 00:05:15.554 "write_zeroes": true, 00:05:15.554 "flush": true, 00:05:15.554 "reset": true, 00:05:15.554 "compare": false, 00:05:15.554 "compare_and_write": 
false, 00:05:15.554 "abort": true, 00:05:15.554 "nvme_admin": false, 00:05:15.554 "nvme_io": false 00:05:15.554 }, 00:05:15.554 "memory_domains": [ 00:05:15.554 { 00:05:15.554 "dma_device_id": "system", 00:05:15.554 "dma_device_type": 1 00:05:15.554 }, 00:05:15.554 { 00:05:15.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.554 "dma_device_type": 2 00:05:15.554 } 00:05:15.554 ], 00:05:15.554 "driver_specific": {} 00:05:15.554 } 00:05:15.554 ]' 00:05:15.554 11:54:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:15.554 11:54:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:15.554 11:54:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:15.555 11:54:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:15.555 11:54:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.555 [2024-06-10 11:54:04.933342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:15.555 [2024-06-10 11:54:04.933373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:15.555 [2024-06-10 11:54:04.933390] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f3d7a0 00:05:15.555 [2024-06-10 11:54:04.933399] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:15.555 [2024-06-10 11:54:04.934360] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:15.555 [2024-06-10 11:54:04.934383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:15.555 Passthru0 00:05:15.555 11:54:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:15.555 11:54:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:15.555 11:54:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:15.555 11:54:04 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.555 11:54:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:15.555 11:54:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:15.555 { 00:05:15.555 "name": "Malloc2", 00:05:15.555 "aliases": [ 00:05:15.555 "e7deda1b-80fe-43f3-b7d0-b472681cb92c" 00:05:15.555 ], 00:05:15.555 "product_name": "Malloc disk", 00:05:15.555 "block_size": 512, 00:05:15.555 "num_blocks": 16384, 00:05:15.555 "uuid": "e7deda1b-80fe-43f3-b7d0-b472681cb92c", 00:05:15.555 "assigned_rate_limits": { 00:05:15.555 "rw_ios_per_sec": 0, 00:05:15.555 "rw_mbytes_per_sec": 0, 00:05:15.555 "r_mbytes_per_sec": 0, 00:05:15.555 "w_mbytes_per_sec": 0 00:05:15.555 }, 00:05:15.555 "claimed": true, 00:05:15.555 "claim_type": "exclusive_write", 00:05:15.555 "zoned": false, 00:05:15.555 "supported_io_types": { 00:05:15.555 "read": true, 00:05:15.555 "write": true, 00:05:15.555 "unmap": true, 00:05:15.555 "write_zeroes": true, 00:05:15.555 "flush": true, 00:05:15.555 "reset": true, 00:05:15.555 "compare": false, 00:05:15.555 "compare_and_write": false, 00:05:15.555 "abort": true, 00:05:15.555 "nvme_admin": false, 00:05:15.555 "nvme_io": false 00:05:15.555 }, 00:05:15.555 "memory_domains": [ 00:05:15.555 { 00:05:15.555 "dma_device_id": "system", 00:05:15.555 "dma_device_type": 1 00:05:15.555 }, 00:05:15.555 { 00:05:15.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.555 "dma_device_type": 2 00:05:15.555 } 00:05:15.555 ], 00:05:15.555 "driver_specific": {} 00:05:15.555 }, 00:05:15.555 { 00:05:15.555 "name": "Passthru0", 00:05:15.555 "aliases": [ 00:05:15.555 "729fdf59-61b5-5474-8cca-a07765c301bb" 00:05:15.555 ], 00:05:15.555 "product_name": "passthru", 00:05:15.555 "block_size": 512, 00:05:15.555 "num_blocks": 16384, 00:05:15.555 "uuid": "729fdf59-61b5-5474-8cca-a07765c301bb", 00:05:15.555 "assigned_rate_limits": { 00:05:15.555 "rw_ios_per_sec": 0, 00:05:15.555 "rw_mbytes_per_sec": 0, 
00:05:15.555 "r_mbytes_per_sec": 0, 00:05:15.555 "w_mbytes_per_sec": 0 00:05:15.555 }, 00:05:15.555 "claimed": false, 00:05:15.555 "zoned": false, 00:05:15.555 "supported_io_types": { 00:05:15.555 "read": true, 00:05:15.555 "write": true, 00:05:15.555 "unmap": true, 00:05:15.555 "write_zeroes": true, 00:05:15.555 "flush": true, 00:05:15.555 "reset": true, 00:05:15.555 "compare": false, 00:05:15.555 "compare_and_write": false, 00:05:15.555 "abort": true, 00:05:15.555 "nvme_admin": false, 00:05:15.555 "nvme_io": false 00:05:15.555 }, 00:05:15.555 "memory_domains": [ 00:05:15.555 { 00:05:15.555 "dma_device_id": "system", 00:05:15.555 "dma_device_type": 1 00:05:15.555 }, 00:05:15.555 { 00:05:15.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.555 "dma_device_type": 2 00:05:15.555 } 00:05:15.555 ], 00:05:15.555 "driver_specific": { 00:05:15.555 "passthru": { 00:05:15.555 "name": "Passthru0", 00:05:15.555 "base_bdev_name": "Malloc2" 00:05:15.555 } 00:05:15.555 } 00:05:15.555 } 00:05:15.555 ]' 00:05:15.555 11:54:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:15.555 11:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:15.555 11:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:15.555 11:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:15.555 11:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.555 11:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:15.555 11:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:15.555 11:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:15.555 11:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.555 11:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:15.555 11:54:05 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:15.555 11:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:15.555 11:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.555 11:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:15.555 11:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:15.555 11:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:15.814 11:54:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:15.814 00:05:15.814 real 0m0.278s 00:05:15.814 user 0m0.171s 00:05:15.814 sys 0m0.051s 00:05:15.814 11:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:15.814 11:54:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.814 ************************************ 00:05:15.814 END TEST rpc_daemon_integrity 00:05:15.814 ************************************ 00:05:15.814 11:54:05 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:15.814 11:54:05 rpc -- rpc/rpc.sh@84 -- # killprocess 2024190 00:05:15.814 11:54:05 rpc -- common/autotest_common.sh@949 -- # '[' -z 2024190 ']' 00:05:15.814 11:54:05 rpc -- common/autotest_common.sh@953 -- # kill -0 2024190 00:05:15.814 11:54:05 rpc -- common/autotest_common.sh@954 -- # uname 00:05:15.814 11:54:05 rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:15.814 11:54:05 rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2024190 00:05:15.814 11:54:05 rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:15.814 11:54:05 rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:15.814 11:54:05 rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2024190' 00:05:15.814 killing process with pid 2024190 00:05:15.814 11:54:05 rpc -- common/autotest_common.sh@968 -- # kill 2024190 
00:05:15.814 11:54:05 rpc -- common/autotest_common.sh@973 -- # wait 2024190 00:05:16.074 00:05:16.074 real 0m2.520s 00:05:16.074 user 0m3.193s 00:05:16.074 sys 0m0.791s 00:05:16.074 11:54:05 rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:16.074 11:54:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.074 ************************************ 00:05:16.074 END TEST rpc 00:05:16.074 ************************************ 00:05:16.074 11:54:05 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:16.074 11:54:05 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:16.074 11:54:05 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:16.074 11:54:05 -- common/autotest_common.sh@10 -- # set +x 00:05:16.074 ************************************ 00:05:16.074 START TEST skip_rpc 00:05:16.074 ************************************ 00:05:16.074 11:54:05 skip_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:16.333 * Looking for test storage... 
00:05:16.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:16.333 11:54:05 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:16.333 11:54:05 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:16.333 11:54:05 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:16.333 11:54:05 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:16.333 11:54:05 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:16.333 11:54:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.333 ************************************ 00:05:16.333 START TEST skip_rpc 00:05:16.333 ************************************ 00:05:16.333 11:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # test_skip_rpc 00:05:16.333 11:54:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2025308 00:05:16.333 11:54:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:16.333 11:54:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:16.333 11:54:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:16.333 [2024-06-10 11:54:05.738603] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:05:16.333 [2024-06-10 11:54:05.738651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2025308 ] 00:05:16.333 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.333 [2024-06-10 11:54:05.806450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.592 [2024-06-10 11:54:05.878868] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.866 11:54:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:21.866 11:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:05:21.866 11:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:21.866 11:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:05:21.866 11:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:21.866 11:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:05:21.867 11:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:21.867 11:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:05:21.867 11:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:21.867 11:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.867 11:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:21.867 11:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:05:21.867 11:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:21.867 11:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:21.867 11:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:21.867 
11:54:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:21.867 11:54:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2025308 00:05:21.867 11:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@949 -- # '[' -z 2025308 ']' 00:05:21.867 11:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # kill -0 2025308 00:05:21.867 11:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # uname 00:05:21.867 11:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:21.867 11:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2025308 00:05:21.867 11:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:21.867 11:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:21.867 11:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2025308' 00:05:21.867 killing process with pid 2025308 00:05:21.867 11:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # kill 2025308 00:05:21.867 11:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # wait 2025308 00:05:21.867 00:05:21.867 real 0m5.378s 00:05:21.867 user 0m5.137s 00:05:21.867 sys 0m0.279s 00:05:21.867 11:54:11 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:21.867 11:54:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.867 ************************************ 00:05:21.867 END TEST skip_rpc 00:05:21.867 ************************************ 00:05:21.867 11:54:11 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:21.867 11:54:11 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:21.867 11:54:11 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:21.867 11:54:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.867 
************************************ 00:05:21.867 START TEST skip_rpc_with_json 00:05:21.867 ************************************ 00:05:21.867 11:54:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_json 00:05:21.867 11:54:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:21.867 11:54:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2026257 00:05:21.867 11:54:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.867 11:54:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:21.867 11:54:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2026257 00:05:21.867 11:54:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # '[' -z 2026257 ']' 00:05:21.867 11:54:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.867 11:54:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:21.867 11:54:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.867 11:54:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:21.867 11:54:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:21.867 [2024-06-10 11:54:11.194275] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:05:21.867 [2024-06-10 11:54:11.194323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2026257 ] 00:05:21.867 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.867 [2024-06-10 11:54:11.264536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.867 [2024-06-10 11:54:11.338394] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.810 11:54:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:22.810 11:54:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@863 -- # return 0 00:05:22.810 11:54:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:22.810 11:54:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:22.810 11:54:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.810 [2024-06-10 11:54:11.992037] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:22.810 request: 00:05:22.810 { 00:05:22.810 "trtype": "tcp", 00:05:22.810 "method": "nvmf_get_transports", 00:05:22.810 "req_id": 1 00:05:22.810 } 00:05:22.810 Got JSON-RPC error response 00:05:22.810 response: 00:05:22.810 { 00:05:22.810 "code": -19, 00:05:22.810 "message": "No such device" 00:05:22.810 } 00:05:22.810 11:54:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:22.810 11:54:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:22.810 11:54:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:22.810 11:54:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.810 [2024-06-10 11:54:12.004142] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init *** 00:05:22.810 11:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:22.810 11:54:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:22.810 11:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:22.810 11:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.810 11:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:22.810 11:54:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:22.810 { 00:05:22.810 "subsystems": [ 00:05:22.810 { 00:05:22.810 "subsystem": "vfio_user_target", 00:05:22.810 "config": null 00:05:22.810 }, 00:05:22.810 { 00:05:22.810 "subsystem": "keyring", 00:05:22.810 "config": [] 00:05:22.810 }, 00:05:22.810 { 00:05:22.810 "subsystem": "iobuf", 00:05:22.810 "config": [ 00:05:22.810 { 00:05:22.811 "method": "iobuf_set_options", 00:05:22.811 "params": { 00:05:22.811 "small_pool_count": 8192, 00:05:22.811 "large_pool_count": 1024, 00:05:22.811 "small_bufsize": 8192, 00:05:22.811 "large_bufsize": 135168 00:05:22.811 } 00:05:22.811 } 00:05:22.811 ] 00:05:22.811 }, 00:05:22.811 { 00:05:22.811 "subsystem": "sock", 00:05:22.811 "config": [ 00:05:22.811 { 00:05:22.811 "method": "sock_set_default_impl", 00:05:22.811 "params": { 00:05:22.811 "impl_name": "posix" 00:05:22.811 } 00:05:22.811 }, 00:05:22.811 { 00:05:22.811 "method": "sock_impl_set_options", 00:05:22.811 "params": { 00:05:22.811 "impl_name": "ssl", 00:05:22.811 "recv_buf_size": 4096, 00:05:22.811 "send_buf_size": 4096, 00:05:22.811 "enable_recv_pipe": true, 00:05:22.811 "enable_quickack": false, 00:05:22.811 "enable_placement_id": 0, 00:05:22.811 "enable_zerocopy_send_server": true, 00:05:22.811 "enable_zerocopy_send_client": false, 00:05:22.811 "zerocopy_threshold": 0, 00:05:22.811 "tls_version": 0, 
00:05:22.811 "enable_ktls": false 00:05:22.811 } 00:05:22.811 }, 00:05:22.811 { 00:05:22.811 "method": "sock_impl_set_options", 00:05:22.811 "params": { 00:05:22.811 "impl_name": "posix", 00:05:22.811 "recv_buf_size": 2097152, 00:05:22.811 "send_buf_size": 2097152, 00:05:22.811 "enable_recv_pipe": true, 00:05:22.811 "enable_quickack": false, 00:05:22.811 "enable_placement_id": 0, 00:05:22.811 "enable_zerocopy_send_server": true, 00:05:22.811 "enable_zerocopy_send_client": false, 00:05:22.811 "zerocopy_threshold": 0, 00:05:22.811 "tls_version": 0, 00:05:22.811 "enable_ktls": false 00:05:22.811 } 00:05:22.811 } 00:05:22.811 ] 00:05:22.811 }, 00:05:22.811 { 00:05:22.811 "subsystem": "vmd", 00:05:22.811 "config": [] 00:05:22.811 }, 00:05:22.811 { 00:05:22.811 "subsystem": "accel", 00:05:22.811 "config": [ 00:05:22.811 { 00:05:22.811 "method": "accel_set_options", 00:05:22.811 "params": { 00:05:22.811 "small_cache_size": 128, 00:05:22.811 "large_cache_size": 16, 00:05:22.811 "task_count": 2048, 00:05:22.811 "sequence_count": 2048, 00:05:22.811 "buf_count": 2048 00:05:22.811 } 00:05:22.811 } 00:05:22.811 ] 00:05:22.811 }, 00:05:22.811 { 00:05:22.811 "subsystem": "bdev", 00:05:22.811 "config": [ 00:05:22.811 { 00:05:22.811 "method": "bdev_set_options", 00:05:22.811 "params": { 00:05:22.811 "bdev_io_pool_size": 65535, 00:05:22.811 "bdev_io_cache_size": 256, 00:05:22.811 "bdev_auto_examine": true, 00:05:22.811 "iobuf_small_cache_size": 128, 00:05:22.811 "iobuf_large_cache_size": 16 00:05:22.811 } 00:05:22.811 }, 00:05:22.811 { 00:05:22.811 "method": "bdev_raid_set_options", 00:05:22.811 "params": { 00:05:22.811 "process_window_size_kb": 1024 00:05:22.811 } 00:05:22.811 }, 00:05:22.811 { 00:05:22.811 "method": "bdev_iscsi_set_options", 00:05:22.811 "params": { 00:05:22.811 "timeout_sec": 30 00:05:22.811 } 00:05:22.811 }, 00:05:22.811 { 00:05:22.811 "method": "bdev_nvme_set_options", 00:05:22.811 "params": { 00:05:22.811 "action_on_timeout": "none", 00:05:22.811 "timeout_us": 
0, 00:05:22.811 "timeout_admin_us": 0, 00:05:22.811 "keep_alive_timeout_ms": 10000, 00:05:22.811 "arbitration_burst": 0, 00:05:22.811 "low_priority_weight": 0, 00:05:22.811 "medium_priority_weight": 0, 00:05:22.811 "high_priority_weight": 0, 00:05:22.811 "nvme_adminq_poll_period_us": 10000, 00:05:22.811 "nvme_ioq_poll_period_us": 0, 00:05:22.811 "io_queue_requests": 0, 00:05:22.811 "delay_cmd_submit": true, 00:05:22.811 "transport_retry_count": 4, 00:05:22.811 "bdev_retry_count": 3, 00:05:22.811 "transport_ack_timeout": 0, 00:05:22.811 "ctrlr_loss_timeout_sec": 0, 00:05:22.811 "reconnect_delay_sec": 0, 00:05:22.811 "fast_io_fail_timeout_sec": 0, 00:05:22.811 "disable_auto_failback": false, 00:05:22.811 "generate_uuids": false, 00:05:22.811 "transport_tos": 0, 00:05:22.811 "nvme_error_stat": false, 00:05:22.811 "rdma_srq_size": 0, 00:05:22.811 "io_path_stat": false, 00:05:22.811 "allow_accel_sequence": false, 00:05:22.811 "rdma_max_cq_size": 0, 00:05:22.811 "rdma_cm_event_timeout_ms": 0, 00:05:22.811 "dhchap_digests": [ 00:05:22.811 "sha256", 00:05:22.811 "sha384", 00:05:22.811 "sha512" 00:05:22.811 ], 00:05:22.811 "dhchap_dhgroups": [ 00:05:22.811 "null", 00:05:22.811 "ffdhe2048", 00:05:22.811 "ffdhe3072", 00:05:22.811 "ffdhe4096", 00:05:22.811 "ffdhe6144", 00:05:22.811 "ffdhe8192" 00:05:22.811 ] 00:05:22.811 } 00:05:22.811 }, 00:05:22.811 { 00:05:22.811 "method": "bdev_nvme_set_hotplug", 00:05:22.811 "params": { 00:05:22.811 "period_us": 100000, 00:05:22.811 "enable": false 00:05:22.811 } 00:05:22.811 }, 00:05:22.811 { 00:05:22.811 "method": "bdev_wait_for_examine" 00:05:22.811 } 00:05:22.811 ] 00:05:22.811 }, 00:05:22.811 { 00:05:22.811 "subsystem": "scsi", 00:05:22.811 "config": null 00:05:22.811 }, 00:05:22.811 { 00:05:22.811 "subsystem": "scheduler", 00:05:22.811 "config": [ 00:05:22.811 { 00:05:22.811 "method": "framework_set_scheduler", 00:05:22.811 "params": { 00:05:22.811 "name": "static" 00:05:22.811 } 00:05:22.811 } 00:05:22.811 ] 00:05:22.811 }, 
00:05:22.811 { 00:05:22.811 "subsystem": "vhost_scsi", 00:05:22.811 "config": [] 00:05:22.811 }, 00:05:22.811 { 00:05:22.811 "subsystem": "vhost_blk", 00:05:22.811 "config": [] 00:05:22.811 }, 00:05:22.811 { 00:05:22.811 "subsystem": "ublk", 00:05:22.811 "config": [] 00:05:22.811 }, 00:05:22.811 { 00:05:22.811 "subsystem": "nbd", 00:05:22.811 "config": [] 00:05:22.811 }, 00:05:22.811 { 00:05:22.811 "subsystem": "nvmf", 00:05:22.811 "config": [ 00:05:22.811 { 00:05:22.811 "method": "nvmf_set_config", 00:05:22.811 "params": { 00:05:22.811 "discovery_filter": "match_any", 00:05:22.811 "admin_cmd_passthru": { 00:05:22.811 "identify_ctrlr": false 00:05:22.811 } 00:05:22.811 } 00:05:22.811 }, 00:05:22.811 { 00:05:22.811 "method": "nvmf_set_max_subsystems", 00:05:22.811 "params": { 00:05:22.811 "max_subsystems": 1024 00:05:22.811 } 00:05:22.811 }, 00:05:22.811 { 00:05:22.811 "method": "nvmf_set_crdt", 00:05:22.811 "params": { 00:05:22.811 "crdt1": 0, 00:05:22.811 "crdt2": 0, 00:05:22.811 "crdt3": 0 00:05:22.811 } 00:05:22.811 }, 00:05:22.811 { 00:05:22.811 "method": "nvmf_create_transport", 00:05:22.811 "params": { 00:05:22.811 "trtype": "TCP", 00:05:22.811 "max_queue_depth": 128, 00:05:22.811 "max_io_qpairs_per_ctrlr": 127, 00:05:22.811 "in_capsule_data_size": 4096, 00:05:22.811 "max_io_size": 131072, 00:05:22.811 "io_unit_size": 131072, 00:05:22.811 "max_aq_depth": 128, 00:05:22.811 "num_shared_buffers": 511, 00:05:22.811 "buf_cache_size": 4294967295, 00:05:22.811 "dif_insert_or_strip": false, 00:05:22.811 "zcopy": false, 00:05:22.811 "c2h_success": true, 00:05:22.811 "sock_priority": 0, 00:05:22.811 "abort_timeout_sec": 1, 00:05:22.811 "ack_timeout": 0, 00:05:22.811 "data_wr_pool_size": 0 00:05:22.811 } 00:05:22.811 } 00:05:22.811 ] 00:05:22.811 }, 00:05:22.811 { 00:05:22.811 "subsystem": "iscsi", 00:05:22.811 "config": [ 00:05:22.811 { 00:05:22.811 "method": "iscsi_set_options", 00:05:22.811 "params": { 00:05:22.811 "node_base": "iqn.2016-06.io.spdk", 00:05:22.811 
"max_sessions": 128, 00:05:22.811 "max_connections_per_session": 2, 00:05:22.811 "max_queue_depth": 64, 00:05:22.811 "default_time2wait": 2, 00:05:22.811 "default_time2retain": 20, 00:05:22.811 "first_burst_length": 8192, 00:05:22.811 "immediate_data": true, 00:05:22.811 "allow_duplicated_isid": false, 00:05:22.811 "error_recovery_level": 0, 00:05:22.811 "nop_timeout": 60, 00:05:22.811 "nop_in_interval": 30, 00:05:22.811 "disable_chap": false, 00:05:22.811 "require_chap": false, 00:05:22.811 "mutual_chap": false, 00:05:22.811 "chap_group": 0, 00:05:22.811 "max_large_datain_per_connection": 64, 00:05:22.811 "max_r2t_per_connection": 4, 00:05:22.811 "pdu_pool_size": 36864, 00:05:22.811 "immediate_data_pool_size": 16384, 00:05:22.811 "data_out_pool_size": 2048 00:05:22.811 } 00:05:22.811 } 00:05:22.811 ] 00:05:22.811 } 00:05:22.811 ] 00:05:22.811 } 00:05:22.811 11:54:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:22.811 11:54:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2026257 00:05:22.811 11:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 2026257 ']' 00:05:22.811 11:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 2026257 00:05:22.811 11:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:05:22.811 11:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:22.811 11:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2026257 00:05:22.811 11:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:22.811 11:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:22.811 11:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2026257' 00:05:22.811 killing process with pid 2026257 
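The JSON dump above is the runtime configuration that the test saves and later replays with `spdk_tgt --json config.json`. A minimal sketch of walking such a config to list each subsystem's startup RPC methods — the embedded config here is a trimmed, hypothetical subset of the real dump, which contains many more subsystems and parameters:

```python
import json

# A trimmed, hypothetical subset of the config dumped above.
CONFIG = json.loads("""
{
  "subsystems": [
    {"subsystem": "scheduler",
     "config": [{"method": "framework_set_scheduler",
                 "params": {"name": "static"}}]},
    {"subsystem": "scsi", "config": null},
    {"subsystem": "nvmf",
     "config": [{"method": "nvmf_create_transport",
                 "params": {"trtype": "TCP", "max_queue_depth": 128}}]}
  ]
}
""")

def list_methods(cfg):
    """Yield (subsystem, method) pairs; a null config means no startup RPCs."""
    for sub in cfg["subsystems"]:
        for entry in sub["config"] or []:
            yield sub["subsystem"], entry["method"]

for subsystem, method in list_methods(CONFIG):
    print(f"{subsystem}: {method}")
```

Note that a subsystem with `"config": null` (like `scsi` above) contributes no startup RPCs, which is why the replay run only re-issues methods for configured subsystems.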
00:05:22.811 11:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 2026257 00:05:22.811 11:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 2026257 00:05:23.071 11:54:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2026452 00:05:23.071 11:54:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:23.071 11:54:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:28.352 11:54:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2026452 00:05:28.352 11:54:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 2026452 ']' 00:05:28.352 11:54:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 2026452 00:05:28.352 11:54:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:05:28.352 11:54:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:28.352 11:54:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2026452 00:05:28.352 11:54:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:28.352 11:54:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:28.352 11:54:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2026452' 00:05:28.352 killing process with pid 2026452 00:05:28.352 11:54:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 2026452 00:05:28.352 11:54:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 2026452 00:05:28.612 11:54:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:28.612 11:54:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:28.612 00:05:28.612 real 0m6.768s 00:05:28.612 user 0m6.551s 00:05:28.612 sys 0m0.665s 00:05:28.612 11:54:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:28.612 11:54:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:28.612 ************************************ 00:05:28.612 END TEST skip_rpc_with_json 00:05:28.612 ************************************ 00:05:28.612 11:54:17 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:28.612 11:54:17 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:28.612 11:54:17 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:28.612 11:54:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.612 ************************************ 00:05:28.612 START TEST skip_rpc_with_delay 00:05:28.612 ************************************ 00:05:28.612 11:54:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_delay 00:05:28.612 11:54:17 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:28.612 11:54:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:05:28.612 11:54:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:28.612 11:54:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.612 11:54:17 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:28.612 11:54:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.612 11:54:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:28.612 11:54:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.612 11:54:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:28.612 11:54:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.613 11:54:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:28.613 11:54:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:28.613 [2024-06-10 11:54:18.046313] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
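The `NOT` wrapper being traced above runs `spdk_tgt --no-rpc-server --wait-for-rpc` and treats the expected startup error as the passing outcome. A minimal sketch of the same expect-failure pattern — the helper name is hypothetical, and a deliberately failing Python command stands in for `spdk_tgt` rejecting the flag combination:

```python
import subprocess
import sys

def expect_failure(cmd):
    """Run cmd and return True only if it exits non-zero, mirroring the
    harness's NOT helper: a clean exit means the test failed."""
    result = subprocess.run(cmd, capture_output=True)
    return result.returncode != 0

# Stand-ins for spdk_tgt rejecting (failing) or accepting (passing) its args.
failing = [sys.executable, "-c", "raise SystemExit(1)"]
passing = [sys.executable, "-c", "raise SystemExit(0)"]
print(expect_failure(failing), expect_failure(passing))
```

This inversion is why the trace records `es=1` after the error: the wrapper maps the child's non-zero exit back to success for the surrounding test.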
00:05:28.613 [2024-06-10 11:54:18.046379] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:28.613 11:54:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:05:28.613 11:54:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:28.613 11:54:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:28.613 11:54:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:28.613 00:05:28.613 real 0m0.069s 00:05:28.613 user 0m0.041s 00:05:28.613 sys 0m0.028s 00:05:28.613 11:54:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:28.613 11:54:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:28.613 ************************************ 00:05:28.613 END TEST skip_rpc_with_delay 00:05:28.613 ************************************ 00:05:28.613 11:54:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:28.613 11:54:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:28.613 11:54:18 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:28.613 11:54:18 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:28.613 11:54:18 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:28.613 11:54:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.873 ************************************ 00:05:28.873 START TEST exit_on_failed_rpc_init 00:05:28.873 ************************************ 00:05:28.873 11:54:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # test_exit_on_failed_rpc_init 00:05:28.873 11:54:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.873 11:54:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local 
spdk_pid=2027544 00:05:28.873 11:54:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2027544 00:05:28.873 11:54:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@830 -- # '[' -z 2027544 ']' 00:05:28.873 11:54:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.873 11:54:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:28.873 11:54:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.873 11:54:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:28.873 11:54:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:28.873 [2024-06-10 11:54:18.181652] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:05:28.873 [2024-06-10 11:54:18.181698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2027544 ] 00:05:28.873 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.873 [2024-06-10 11:54:18.247805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.873 [2024-06-10 11:54:18.322647] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.811 11:54:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:29.811 11:54:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@863 -- # return 0 00:05:29.811 11:54:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.811 11:54:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:29.811 11:54:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:05:29.811 11:54:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:29.811 11:54:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.811 11:54:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:29.811 11:54:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.811 11:54:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:29.811 11:54:18 skip_rpc.exit_on_failed_rpc_init -- 
common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.811 11:54:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:29.811 11:54:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.811 11:54:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:29.811 11:54:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:29.811 [2024-06-10 11:54:19.038391] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:05:29.811 [2024-06-10 11:54:19.038443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2027724 ] 00:05:29.811 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.811 [2024-06-10 11:54:19.106971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.811 [2024-06-10 11:54:19.174836] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.811 [2024-06-10 11:54:19.174905] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:29.811 [2024-06-10 11:54:19.174916] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:29.811 [2024-06-10 11:54:19.174924] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:29.811 11:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:05:29.811 11:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:29.811 11:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:05:29.811 11:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:05:29.811 11:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:05:29.811 11:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:29.811 11:54:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:29.811 11:54:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2027544 00:05:29.811 11:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@949 -- # '[' -z 2027544 ']' 00:05:29.811 11:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # kill -0 2027544 00:05:29.811 11:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # uname 00:05:29.811 11:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:29.811 11:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2027544 00:05:29.811 11:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:29.811 11:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:29.811 11:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2027544' 
00:05:29.811 killing process with pid 2027544 00:05:29.811 11:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # kill 2027544 00:05:29.811 11:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # wait 2027544 00:05:30.380 00:05:30.380 real 0m1.466s 00:05:30.380 user 0m1.667s 00:05:30.380 sys 0m0.431s 00:05:30.380 11:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:30.380 11:54:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:30.380 ************************************ 00:05:30.380 END TEST exit_on_failed_rpc_init 00:05:30.380 ************************************ 00:05:30.380 11:54:19 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:30.380 00:05:30.380 real 0m14.108s 00:05:30.380 user 0m13.540s 00:05:30.380 sys 0m1.721s 00:05:30.380 11:54:19 skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:30.380 11:54:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.381 ************************************ 00:05:30.381 END TEST skip_rpc 00:05:30.381 ************************************ 00:05:30.381 11:54:19 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:30.381 11:54:19 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:30.381 11:54:19 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:30.381 11:54:19 -- common/autotest_common.sh@10 -- # set +x 00:05:30.381 ************************************ 00:05:30.381 START TEST rpc_client 00:05:30.381 ************************************ 00:05:30.381 11:54:19 rpc_client -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:30.381 * Looking for test storage... 
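The `killprocess` helper traced above first probes the pid with `kill -0`, checks the process name, then kills and waits on it. A rough Python equivalent of that probe/terminate/reap sequence — the behavior is inferred from the trace, not SPDK's actual shell function:

```python
import signal
import subprocess
import sys
import os

def killprocess(proc):
    """Probe, terminate, and reap a child, loosely following the
    kill -0 / kill / wait sequence in the shell helper above."""
    os.kill(proc.pid, 0)           # raises ProcessLookupError if already gone
    proc.send_signal(signal.SIGTERM)
    return proc.wait()             # reap so no zombie is left behind

# Stand-in child process for the spdk_tgt instance in the log.
child = subprocess.Popen([sys.executable, "-c",
                          "import time; time.sleep(60)"])
rc = killprocess(child)
print("reaped with status", rc)
```

The trailing `wait` matters for the same reason the test script waits on the pid: without reaping, the dead target would linger as a zombie and the workspace cleanup step could still see it in `pgrep` output.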
00:05:30.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:30.381 11:54:19 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:30.381 OK 00:05:30.381 11:54:19 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:30.381 00:05:30.381 real 0m0.115s 00:05:30.381 user 0m0.042s 00:05:30.381 sys 0m0.082s 00:05:30.381 11:54:19 rpc_client -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:30.381 11:54:19 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:30.381 ************************************ 00:05:30.381 END TEST rpc_client 00:05:30.381 ************************************ 00:05:30.381 11:54:19 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:30.381 11:54:19 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:30.381 11:54:19 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:30.381 11:54:19 -- common/autotest_common.sh@10 -- # set +x 00:05:30.641 ************************************ 00:05:30.641 START TEST json_config 00:05:30.641 ************************************ 00:05:30.641 11:54:19 json_config -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:30.641 11:54:20 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:30.641 11:54:20 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:30.641 11:54:20 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:30.641 11:54:20 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:30.641 11:54:20 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:30.641 11:54:20 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:30.641 11:54:20 json_config -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:30.641 11:54:20 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:30.641 11:54:20 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:30.641 11:54:20 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:30.641 11:54:20 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:30.641 11:54:20 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:30.641 11:54:20 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:05:30.641 11:54:20 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:05:30.641 11:54:20 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:30.641 11:54:20 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:30.641 11:54:20 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:30.641 11:54:20 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:30.641 11:54:20 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:30.641 11:54:20 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:30.641 11:54:20 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:30.641 11:54:20 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:30.641 11:54:20 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
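The `paths/export.sh` lines above prepend the same toolchain directories on every source, so by the final export PATH carries the `/opt/golangci`, `/opt/protoc`, and `/opt/go` entries four times each. A small sketch of order-preserving deduplication, which the script does not do (shortened paths, for illustration only):

```python
def dedupe_path(path):
    """Drop repeated PATH entries, keeping the first occurrence of each."""
    seen = set()
    out = []
    for entry in path.split(":"):
        if entry and entry not in seen:
            seen.add(entry)
            out.append(entry)
    return ":".join(out)

# Shortened version of the duplicated PATH visible in the log.
messy = ("/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:"
         "/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/bin")
print(dedupe_path(messy))
# → /opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/bin
```

Duplicates are harmless for lookup (the shell stops at the first hit) but they inflate every child environment and make logs like this one harder to read.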
00:05:30.641 11:54:20 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.641 11:54:20 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.641 11:54:20 json_config -- paths/export.sh@5 -- # export PATH 00:05:30.641 11:54:20 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.641 11:54:20 json_config -- nvmf/common.sh@47 -- # : 0 00:05:30.641 11:54:20 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:30.641 11:54:20 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:30.641 11:54:20 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:30.641 11:54:20 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:30.641 11:54:20 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:30.641 11:54:20 json_config -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:30.641 11:54:20 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:30.641 11:54:20 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:30.641 11:54:20 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:30.641 11:54:20 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:30.641 11:54:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:30.641 11:54:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:30.641 11:54:20 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:30.641 11:54:20 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:30.641 11:54:20 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:30.641 11:54:20 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:30.641 11:54:20 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:30.641 11:54:20 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:30.641 11:54:20 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:30.641 11:54:20 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:30.641 11:54:20 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:30.641 11:54:20 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:30.641 11:54:20 json_config -- 
json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:30.641 11:54:20 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:30.641 INFO: JSON configuration test init 00:05:30.641 11:54:20 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:30.641 11:54:20 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:30.641 11:54:20 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:30.641 11:54:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.641 11:54:20 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:30.641 11:54:20 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:30.641 11:54:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.641 11:54:20 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:30.641 11:54:20 json_config -- json_config/common.sh@9 -- # local app=target 00:05:30.641 11:54:20 json_config -- json_config/common.sh@10 -- # shift 00:05:30.641 11:54:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:30.641 11:54:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:30.641 11:54:20 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:30.641 11:54:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:30.641 11:54:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:30.641 11:54:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2027934 00:05:30.641 11:54:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:30.641 Waiting for target to run... 
00:05:30.641 11:54:20 json_config -- json_config/common.sh@25 -- # waitforlisten 2027934 /var/tmp/spdk_tgt.sock 00:05:30.641 11:54:20 json_config -- common/autotest_common.sh@830 -- # '[' -z 2027934 ']' 00:05:30.641 11:54:20 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:30.641 11:54:20 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:30.641 11:54:20 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:30.641 11:54:20 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:30.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:30.641 11:54:20 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:30.641 11:54:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.641 [2024-06-10 11:54:20.100851] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
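The `waitforlisten` call above blocks until the target's RPC socket `/var/tmp/spdk_tgt.sock` shows up, retrying with `max_retries=100`. A minimal polling sketch of that idea — the interval and the file-existence check are illustrative simplifications, not SPDK's exact mechanism (the real helper also confirms the socket accepts connections):

```python
import os
import tempfile
import threading
import time

def wait_for_path(path, retries=100, interval=0.01):
    """Poll until path exists, like waiting for an RPC socket to appear."""
    for _ in range(retries):
        if os.path.exists(path):
            return True
        time.sleep(interval)
    return False

# Simulate a target creating its socket file shortly after launch.
sock = os.path.join(tempfile.mkdtemp(), "spdk_tgt.sock")
threading.Timer(0.05, lambda: open(sock, "w").close()).start()
print("listening:", wait_for_path(sock))
```

Bounding the retries is what lets the harness fail fast when the target never comes up, instead of hanging the whole CI stage.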
00:05:30.641 [2024-06-10 11:54:20.100905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2027934 ] 00:05:30.641 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.210 [2024-06-10 11:54:20.523906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.210 [2024-06-10 11:54:20.608931] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.469 11:54:20 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:31.469 11:54:20 json_config -- common/autotest_common.sh@863 -- # return 0 00:05:31.469 11:54:20 json_config -- json_config/common.sh@26 -- # echo '' 00:05:31.469 00:05:31.469 11:54:20 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:31.469 11:54:20 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:31.469 11:54:20 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:31.469 11:54:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.469 11:54:20 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:31.469 11:54:20 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:31.469 11:54:20 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:31.469 11:54:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.469 11:54:20 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:31.469 11:54:20 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:31.469 11:54:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:34.757 11:54:24 json_config -- 
json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:34.757 11:54:24 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:34.757 11:54:24 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:34.757 11:54:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.757 11:54:24 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:34.757 11:54:24 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:34.757 11:54:24 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:34.757 11:54:24 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:34.757 11:54:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:34.757 11:54:24 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:34.757 11:54:24 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:34.757 11:54:24 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:34.757 11:54:24 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:34.757 11:54:24 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:34.757 11:54:24 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:34.757 11:54:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.757 11:54:24 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:34.757 11:54:24 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:34.757 11:54:24 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:34.757 11:54:24 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 
00:05:34.757 11:54:24 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:34.757 11:54:24 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:34.757 11:54:24 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:34.757 11:54:24 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:34.757 11:54:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.757 11:54:24 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:34.757 11:54:24 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:34.758 11:54:24 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:34.758 11:54:24 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:34.758 11:54:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:35.016 MallocForNvmf0 00:05:35.016 11:54:24 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:35.016 11:54:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:35.275 MallocForNvmf1 00:05:35.276 11:54:24 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:35.276 11:54:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:35.276 [2024-06-10 11:54:24.708620] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:35.276 11:54:24 json_config -- json_config/json_config.sh@246 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:35.276 11:54:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:35.535 11:54:24 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:35.535 11:54:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:35.794 11:54:25 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:35.794 11:54:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:35.794 11:54:25 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:35.794 11:54:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:36.053 [2024-06-10 11:54:25.394777] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:36.053 11:54:25 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:36.053 11:54:25 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:36.053 11:54:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.053 11:54:25 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:36.053 11:54:25 
json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:36.053 11:54:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.053 11:54:25 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:36.053 11:54:25 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:36.053 11:54:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:36.313 MallocBdevForConfigChangeCheck 00:05:36.313 11:54:25 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:36.313 11:54:25 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:36.313 11:54:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.313 11:54:25 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:36.313 11:54:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:36.572 11:54:25 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:36.572 INFO: shutting down applications... 
00:05:36.572 11:54:25 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:36.572 11:54:25 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:36.572 11:54:25 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:36.572 11:54:25 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:39.107 Calling clear_iscsi_subsystem 00:05:39.107 Calling clear_nvmf_subsystem 00:05:39.107 Calling clear_nbd_subsystem 00:05:39.107 Calling clear_ublk_subsystem 00:05:39.107 Calling clear_vhost_blk_subsystem 00:05:39.107 Calling clear_vhost_scsi_subsystem 00:05:39.107 Calling clear_bdev_subsystem 00:05:39.107 11:54:28 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:39.107 11:54:28 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:39.107 11:54:28 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:39.107 11:54:28 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:39.107 11:54:28 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:39.107 11:54:28 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:39.107 11:54:28 json_config -- json_config/json_config.sh@345 -- # break 00:05:39.107 11:54:28 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:39.107 11:54:28 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:39.107 11:54:28 json_config -- 
json_config/common.sh@31 -- # local app=target 00:05:39.107 11:54:28 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:39.107 11:54:28 json_config -- json_config/common.sh@35 -- # [[ -n 2027934 ]] 00:05:39.107 11:54:28 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2027934 00:05:39.107 11:54:28 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:39.107 11:54:28 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.107 11:54:28 json_config -- json_config/common.sh@41 -- # kill -0 2027934 00:05:39.107 11:54:28 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:39.676 11:54:29 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:39.676 11:54:29 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.676 11:54:29 json_config -- json_config/common.sh@41 -- # kill -0 2027934 00:05:39.676 11:54:29 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:39.676 11:54:29 json_config -- json_config/common.sh@43 -- # break 00:05:39.676 11:54:29 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:39.676 11:54:29 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:39.676 SPDK target shutdown done 00:05:39.676 11:54:29 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:39.676 INFO: relaunching applications... 
00:05:39.676 11:54:29 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.676 11:54:29 json_config -- json_config/common.sh@9 -- # local app=target 00:05:39.676 11:54:29 json_config -- json_config/common.sh@10 -- # shift 00:05:39.676 11:54:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:39.676 11:54:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:39.676 11:54:29 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:39.676 11:54:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:39.676 11:54:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:39.676 11:54:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2029659 00:05:39.676 11:54:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:39.676 Waiting for target to run... 00:05:39.676 11:54:29 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.676 11:54:29 json_config -- json_config/common.sh@25 -- # waitforlisten 2029659 /var/tmp/spdk_tgt.sock 00:05:39.676 11:54:29 json_config -- common/autotest_common.sh@830 -- # '[' -z 2029659 ']' 00:05:39.676 11:54:29 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:39.676 11:54:29 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:39.676 11:54:29 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:39.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:39.676 11:54:29 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:39.676 11:54:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.676 [2024-06-10 11:54:29.058821] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:05:39.676 [2024-06-10 11:54:29.058882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2029659 ] 00:05:39.676 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.935 [2024-06-10 11:54:29.343851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.935 [2024-06-10 11:54:29.403557] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.336 [2024-06-10 11:54:32.430503] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:43.336 [2024-06-10 11:54:32.462856] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:43.336 11:54:32 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:43.336 11:54:32 json_config -- common/autotest_common.sh@863 -- # return 0 00:05:43.336 11:54:32 json_config -- json_config/common.sh@26 -- # echo '' 00:05:43.336 00:05:43.336 11:54:32 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:43.336 11:54:32 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:43.336 INFO: Checking if target configuration is the same... 
00:05:43.336 11:54:32 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:43.336 11:54:32 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:43.336 11:54:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:43.336 + '[' 2 -ne 2 ']' 00:05:43.336 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:43.336 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:43.336 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:43.336 +++ basename /dev/fd/62 00:05:43.336 ++ mktemp /tmp/62.XXX 00:05:43.336 + tmp_file_1=/tmp/62.GB1 00:05:43.336 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:43.336 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:43.336 + tmp_file_2=/tmp/spdk_tgt_config.json.EsW 00:05:43.336 + ret=0 00:05:43.336 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:43.336 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:43.336 + diff -u /tmp/62.GB1 /tmp/spdk_tgt_config.json.EsW 00:05:43.336 + echo 'INFO: JSON config files are the same' 00:05:43.336 INFO: JSON config files are the same 00:05:43.336 + rm /tmp/62.GB1 /tmp/spdk_tgt_config.json.EsW 00:05:43.595 + exit 0 00:05:43.595 11:54:32 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:43.595 11:54:32 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:43.595 INFO: changing configuration and checking if this can be detected... 
00:05:43.595 11:54:32 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:43.595 11:54:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:43.595 11:54:33 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:43.595 11:54:33 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:43.595 11:54:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:43.595 + '[' 2 -ne 2 ']' 00:05:43.595 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:43.595 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:43.595 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:43.595 +++ basename /dev/fd/62 00:05:43.595 ++ mktemp /tmp/62.XXX 00:05:43.595 + tmp_file_1=/tmp/62.Ldv 00:05:43.595 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:43.595 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:43.595 + tmp_file_2=/tmp/spdk_tgt_config.json.F1F 00:05:43.595 + ret=0 00:05:43.595 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:43.853 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:44.112 + diff -u /tmp/62.Ldv /tmp/spdk_tgt_config.json.F1F 00:05:44.112 + ret=1 00:05:44.112 + echo '=== Start of file: /tmp/62.Ldv ===' 00:05:44.112 + cat /tmp/62.Ldv 00:05:44.112 + echo '=== End of file: /tmp/62.Ldv ===' 00:05:44.112 + echo '' 00:05:44.112 + echo '=== Start of file: /tmp/spdk_tgt_config.json.F1F ===' 00:05:44.112 + cat /tmp/spdk_tgt_config.json.F1F 00:05:44.112 + echo '=== End of file: /tmp/spdk_tgt_config.json.F1F ===' 00:05:44.112 + echo '' 00:05:44.112 + rm /tmp/62.Ldv /tmp/spdk_tgt_config.json.F1F 00:05:44.112 + exit 1 00:05:44.112 11:54:33 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:44.112 INFO: configuration change detected. 
00:05:44.112 11:54:33 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:44.112 11:54:33 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:44.112 11:54:33 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:44.112 11:54:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:44.112 11:54:33 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:44.112 11:54:33 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:44.112 11:54:33 json_config -- json_config/json_config.sh@317 -- # [[ -n 2029659 ]] 00:05:44.112 11:54:33 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:44.112 11:54:33 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:44.112 11:54:33 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:44.112 11:54:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:44.112 11:54:33 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:44.112 11:54:33 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:44.112 11:54:33 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:44.112 11:54:33 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:44.112 11:54:33 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:44.113 11:54:33 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:44.113 11:54:33 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:44.113 11:54:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:44.113 11:54:33 json_config -- json_config/json_config.sh@323 -- # killprocess 2029659 00:05:44.113 11:54:33 json_config -- common/autotest_common.sh@949 -- # '[' -z 2029659 ']' 00:05:44.113 11:54:33 json_config -- common/autotest_common.sh@953 -- # kill -0 
2029659 00:05:44.113 11:54:33 json_config -- common/autotest_common.sh@954 -- # uname 00:05:44.113 11:54:33 json_config -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:44.113 11:54:33 json_config -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2029659 00:05:44.113 11:54:33 json_config -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:44.113 11:54:33 json_config -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:44.113 11:54:33 json_config -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2029659' 00:05:44.113 killing process with pid 2029659 00:05:44.113 11:54:33 json_config -- common/autotest_common.sh@968 -- # kill 2029659 00:05:44.113 11:54:33 json_config -- common/autotest_common.sh@973 -- # wait 2029659 00:05:46.648 11:54:35 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:46.648 11:54:35 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:46.648 11:54:35 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:46.648 11:54:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.648 11:54:35 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:46.648 11:54:35 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:46.648 INFO: Success 00:05:46.648 00:05:46.648 real 0m15.722s 00:05:46.648 user 0m16.040s 00:05:46.648 sys 0m2.171s 00:05:46.648 11:54:35 json_config -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:46.648 11:54:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.648 ************************************ 00:05:46.648 END TEST json_config 00:05:46.648 ************************************ 00:05:46.648 11:54:35 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:46.648 11:54:35 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:46.648 11:54:35 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:46.648 11:54:35 -- common/autotest_common.sh@10 -- # set +x 00:05:46.648 ************************************ 00:05:46.648 START TEST json_config_extra_key 00:05:46.648 ************************************ 00:05:46.648 11:54:35 json_config_extra_key -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:46.648 11:54:35 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:46.648 11:54:35 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:46.648 11:54:35 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:46.648 11:54:35 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:46.648 11:54:35 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:46.648 11:54:35 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:46.648 11:54:35 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:46.648 11:54:35 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:46.648 11:54:35 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:46.648 11:54:35 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:46.648 11:54:35 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:46.648 11:54:35 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:46.648 11:54:35 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:05:46.648 11:54:35 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:05:46.648 11:54:35 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:46.648 11:54:35 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:46.648 11:54:35 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:46.648 11:54:35 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:46.648 11:54:35 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:46.648 11:54:35 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.648 11:54:35 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.648 11:54:35 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.648 11:54:35 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.648 11:54:35 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.648 11:54:35 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.648 11:54:35 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:46.648 11:54:35 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.648 11:54:35 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:46.648 11:54:35 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:46.648 11:54:35 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:46.648 11:54:35 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:46.648 11:54:35 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:46.648 11:54:35 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:46.648 11:54:35 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:46.648 11:54:35 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:46.648 11:54:35 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:46.648 11:54:35 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:46.648 11:54:35 
json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:46.648 11:54:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:46.648 11:54:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:46.648 11:54:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:46.648 11:54:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:46.648 11:54:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:46.648 11:54:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:46.648 11:54:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:46.648 11:54:35 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:46.648 11:54:35 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:46.648 INFO: launching applications... 
00:05:46.648 11:54:35 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:46.648 11:54:35 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:46.648 11:54:35 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:46.648 11:54:35 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:46.648 11:54:35 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:46.648 11:54:35 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:46.648 11:54:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.648 11:54:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.648 11:54:35 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2031006 00:05:46.648 11:54:35 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:46.648 Waiting for target to run... 
00:05:46.648 11:54:35 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2031006 /var/tmp/spdk_tgt.sock 00:05:46.648 11:54:35 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:46.648 11:54:35 json_config_extra_key -- common/autotest_common.sh@830 -- # '[' -z 2031006 ']' 00:05:46.648 11:54:35 json_config_extra_key -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:46.648 11:54:35 json_config_extra_key -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:46.648 11:54:35 json_config_extra_key -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:46.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:46.648 11:54:35 json_config_extra_key -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:46.648 11:54:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:46.648 [2024-06-10 11:54:35.895854] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:05:46.648 [2024-06-10 11:54:35.895909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2031006 ] 00:05:46.648 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.907 [2024-06-10 11:54:36.336039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.907 [2024-06-10 11:54:36.425532] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.476 11:54:36 json_config_extra_key -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:47.476 11:54:36 json_config_extra_key -- common/autotest_common.sh@863 -- # return 0 00:05:47.476 11:54:36 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:47.476 00:05:47.476 11:54:36 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:47.476 INFO: shutting down applications... 
00:05:47.476 11:54:36 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:47.476 11:54:36 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:47.476 11:54:36 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:47.476 11:54:36 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2031006 ]] 00:05:47.476 11:54:36 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2031006 00:05:47.476 11:54:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:47.476 11:54:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:47.476 11:54:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2031006 00:05:47.476 11:54:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:47.736 11:54:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:47.736 11:54:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:47.736 11:54:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2031006 00:05:47.736 11:54:37 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:47.736 11:54:37 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:47.736 11:54:37 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:47.736 11:54:37 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:47.736 SPDK target shutdown done 00:05:47.736 11:54:37 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:47.736 Success 00:05:47.736 00:05:47.736 real 0m1.466s 00:05:47.736 user 0m1.060s 00:05:47.736 sys 0m0.549s 00:05:47.736 11:54:37 json_config_extra_key -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:47.736 11:54:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:47.736 
************************************ 00:05:47.736 END TEST json_config_extra_key 00:05:47.736 ************************************ 00:05:47.736 11:54:37 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:47.736 11:54:37 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:47.736 11:54:37 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:47.736 11:54:37 -- common/autotest_common.sh@10 -- # set +x 00:05:47.995 ************************************ 00:05:47.995 START TEST alias_rpc 00:05:47.995 ************************************ 00:05:47.995 11:54:37 alias_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:47.995 * Looking for test storage... 00:05:47.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:47.995 11:54:37 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:47.995 11:54:37 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2031343 00:05:47.995 11:54:37 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2031343 00:05:47.995 11:54:37 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.995 11:54:37 alias_rpc -- common/autotest_common.sh@830 -- # '[' -z 2031343 ']' 00:05:47.995 11:54:37 alias_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.995 11:54:37 alias_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:47.995 11:54:37 alias_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:47.995 11:54:37 alias_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:47.995 11:54:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.996 [2024-06-10 11:54:37.445910] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:05:47.996 [2024-06-10 11:54:37.445960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2031343 ] 00:05:47.996 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.255 [2024-06-10 11:54:37.516433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.255 [2024-06-10 11:54:37.586349] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.823 11:54:38 alias_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:48.823 11:54:38 alias_rpc -- common/autotest_common.sh@863 -- # return 0 00:05:48.823 11:54:38 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:49.082 11:54:38 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2031343 00:05:49.082 11:54:38 alias_rpc -- common/autotest_common.sh@949 -- # '[' -z 2031343 ']' 00:05:49.082 11:54:38 alias_rpc -- common/autotest_common.sh@953 -- # kill -0 2031343 00:05:49.082 11:54:38 alias_rpc -- common/autotest_common.sh@954 -- # uname 00:05:49.082 11:54:38 alias_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:49.082 11:54:38 alias_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2031343 00:05:49.082 11:54:38 alias_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:49.082 11:54:38 alias_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:49.082 11:54:38 alias_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2031343' 00:05:49.082 killing process with 
pid 2031343 00:05:49.082 11:54:38 alias_rpc -- common/autotest_common.sh@968 -- # kill 2031343 00:05:49.082 11:54:38 alias_rpc -- common/autotest_common.sh@973 -- # wait 2031343 00:05:49.342 00:05:49.342 real 0m1.494s 00:05:49.342 user 0m1.558s 00:05:49.342 sys 0m0.461s 00:05:49.342 11:54:38 alias_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:49.342 11:54:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.342 ************************************ 00:05:49.342 END TEST alias_rpc 00:05:49.342 ************************************ 00:05:49.342 11:54:38 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:49.342 11:54:38 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:49.342 11:54:38 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:49.342 11:54:38 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:49.342 11:54:38 -- common/autotest_common.sh@10 -- # set +x 00:05:49.342 ************************************ 00:05:49.342 START TEST spdkcli_tcp 00:05:49.342 ************************************ 00:05:49.342 11:54:38 spdkcli_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:49.601 * Looking for test storage... 
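The `killprocess` trace above (`uname`, `ps --no-headers -o comm=`, the `reactor_0 = sudo` guard) follows a defensive pattern: before signalling, confirm the pid is alive and that its command name is not an unrelated process that may have recycled the pid. A hedged sketch of that pattern, with `sleep` standing in for the reactor process:

```shell
#!/usr/bin/env bash
# Stand-in target process; the logged pids (e.g. 2031343) are illustrative.
sleep 30 &
pid=$!

if kill -0 "$pid" 2>/dev/null; then
    # Read the command name for this pid only (Linux procps `ps`).
    process_name=$(ps --no-headers -o comm= "$pid")
    # Refuse to signal if the pid now belongs to e.g. a sudo process.
    if [ "$process_name" != "sudo" ]; then
        echo "killing process with pid $pid"
        kill "$pid"
    fi
fi
```

The name check matters in long-running CI: between recording a pid and killing it, the original process can exit and the kernel can hand the same pid to something else.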
00:05:49.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:49.601 11:54:38 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:49.601 11:54:38 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:49.601 11:54:38 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:49.601 11:54:38 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:49.601 11:54:38 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:49.601 11:54:38 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:49.601 11:54:38 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:49.601 11:54:38 spdkcli_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:49.601 11:54:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:49.601 11:54:38 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2031691 00:05:49.601 11:54:38 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2031691 00:05:49.601 11:54:38 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:49.601 11:54:38 spdkcli_tcp -- common/autotest_common.sh@830 -- # '[' -z 2031691 ']' 00:05:49.601 11:54:38 spdkcli_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.601 11:54:38 spdkcli_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:49.601 11:54:38 spdkcli_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:49.601 11:54:38 spdkcli_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:49.601 11:54:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:49.601 [2024-06-10 11:54:39.020009] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:05:49.601 [2024-06-10 11:54:39.020063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2031691 ] 00:05:49.601 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.601 [2024-06-10 11:54:39.089723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.860 [2024-06-10 11:54:39.159376] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.860 [2024-06-10 11:54:39.159379] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.428 11:54:39 spdkcli_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:50.428 11:54:39 spdkcli_tcp -- common/autotest_common.sh@863 -- # return 0 00:05:50.428 11:54:39 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:50.428 11:54:39 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2031768 00:05:50.428 11:54:39 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:50.688 [ 00:05:50.688 "bdev_malloc_delete", 00:05:50.688 "bdev_malloc_create", 00:05:50.688 "bdev_null_resize", 00:05:50.688 "bdev_null_delete", 00:05:50.688 "bdev_null_create", 00:05:50.688 "bdev_nvme_cuse_unregister", 00:05:50.688 "bdev_nvme_cuse_register", 00:05:50.688 "bdev_opal_new_user", 00:05:50.688 "bdev_opal_set_lock_state", 00:05:50.688 "bdev_opal_delete", 00:05:50.688 "bdev_opal_get_info", 00:05:50.688 "bdev_opal_create", 00:05:50.688 "bdev_nvme_opal_revert", 00:05:50.688 "bdev_nvme_opal_init", 00:05:50.688 
"bdev_nvme_send_cmd", 00:05:50.688 "bdev_nvme_get_path_iostat", 00:05:50.688 "bdev_nvme_get_mdns_discovery_info", 00:05:50.688 "bdev_nvme_stop_mdns_discovery", 00:05:50.688 "bdev_nvme_start_mdns_discovery", 00:05:50.688 "bdev_nvme_set_multipath_policy", 00:05:50.688 "bdev_nvme_set_preferred_path", 00:05:50.688 "bdev_nvme_get_io_paths", 00:05:50.688 "bdev_nvme_remove_error_injection", 00:05:50.688 "bdev_nvme_add_error_injection", 00:05:50.688 "bdev_nvme_get_discovery_info", 00:05:50.688 "bdev_nvme_stop_discovery", 00:05:50.688 "bdev_nvme_start_discovery", 00:05:50.688 "bdev_nvme_get_controller_health_info", 00:05:50.688 "bdev_nvme_disable_controller", 00:05:50.688 "bdev_nvme_enable_controller", 00:05:50.688 "bdev_nvme_reset_controller", 00:05:50.688 "bdev_nvme_get_transport_statistics", 00:05:50.688 "bdev_nvme_apply_firmware", 00:05:50.688 "bdev_nvme_detach_controller", 00:05:50.688 "bdev_nvme_get_controllers", 00:05:50.688 "bdev_nvme_attach_controller", 00:05:50.688 "bdev_nvme_set_hotplug", 00:05:50.688 "bdev_nvme_set_options", 00:05:50.688 "bdev_passthru_delete", 00:05:50.688 "bdev_passthru_create", 00:05:50.688 "bdev_lvol_set_parent_bdev", 00:05:50.688 "bdev_lvol_set_parent", 00:05:50.688 "bdev_lvol_check_shallow_copy", 00:05:50.688 "bdev_lvol_start_shallow_copy", 00:05:50.688 "bdev_lvol_grow_lvstore", 00:05:50.688 "bdev_lvol_get_lvols", 00:05:50.688 "bdev_lvol_get_lvstores", 00:05:50.688 "bdev_lvol_delete", 00:05:50.688 "bdev_lvol_set_read_only", 00:05:50.688 "bdev_lvol_resize", 00:05:50.688 "bdev_lvol_decouple_parent", 00:05:50.688 "bdev_lvol_inflate", 00:05:50.688 "bdev_lvol_rename", 00:05:50.688 "bdev_lvol_clone_bdev", 00:05:50.688 "bdev_lvol_clone", 00:05:50.688 "bdev_lvol_snapshot", 00:05:50.688 "bdev_lvol_create", 00:05:50.688 "bdev_lvol_delete_lvstore", 00:05:50.688 "bdev_lvol_rename_lvstore", 00:05:50.688 "bdev_lvol_create_lvstore", 00:05:50.688 "bdev_raid_set_options", 00:05:50.688 "bdev_raid_remove_base_bdev", 00:05:50.688 "bdev_raid_add_base_bdev", 
00:05:50.688 "bdev_raid_delete", 00:05:50.688 "bdev_raid_create", 00:05:50.688 "bdev_raid_get_bdevs", 00:05:50.688 "bdev_error_inject_error", 00:05:50.688 "bdev_error_delete", 00:05:50.688 "bdev_error_create", 00:05:50.688 "bdev_split_delete", 00:05:50.688 "bdev_split_create", 00:05:50.688 "bdev_delay_delete", 00:05:50.688 "bdev_delay_create", 00:05:50.688 "bdev_delay_update_latency", 00:05:50.688 "bdev_zone_block_delete", 00:05:50.688 "bdev_zone_block_create", 00:05:50.688 "blobfs_create", 00:05:50.688 "blobfs_detect", 00:05:50.688 "blobfs_set_cache_size", 00:05:50.688 "bdev_aio_delete", 00:05:50.688 "bdev_aio_rescan", 00:05:50.688 "bdev_aio_create", 00:05:50.688 "bdev_ftl_set_property", 00:05:50.688 "bdev_ftl_get_properties", 00:05:50.688 "bdev_ftl_get_stats", 00:05:50.688 "bdev_ftl_unmap", 00:05:50.688 "bdev_ftl_unload", 00:05:50.688 "bdev_ftl_delete", 00:05:50.688 "bdev_ftl_load", 00:05:50.688 "bdev_ftl_create", 00:05:50.688 "bdev_virtio_attach_controller", 00:05:50.688 "bdev_virtio_scsi_get_devices", 00:05:50.688 "bdev_virtio_detach_controller", 00:05:50.688 "bdev_virtio_blk_set_hotplug", 00:05:50.688 "bdev_iscsi_delete", 00:05:50.688 "bdev_iscsi_create", 00:05:50.688 "bdev_iscsi_set_options", 00:05:50.688 "accel_error_inject_error", 00:05:50.688 "ioat_scan_accel_module", 00:05:50.688 "dsa_scan_accel_module", 00:05:50.688 "iaa_scan_accel_module", 00:05:50.688 "vfu_virtio_create_scsi_endpoint", 00:05:50.688 "vfu_virtio_scsi_remove_target", 00:05:50.688 "vfu_virtio_scsi_add_target", 00:05:50.688 "vfu_virtio_create_blk_endpoint", 00:05:50.688 "vfu_virtio_delete_endpoint", 00:05:50.688 "keyring_file_remove_key", 00:05:50.688 "keyring_file_add_key", 00:05:50.688 "keyring_linux_set_options", 00:05:50.688 "iscsi_get_histogram", 00:05:50.688 "iscsi_enable_histogram", 00:05:50.688 "iscsi_set_options", 00:05:50.688 "iscsi_get_auth_groups", 00:05:50.688 "iscsi_auth_group_remove_secret", 00:05:50.688 "iscsi_auth_group_add_secret", 00:05:50.688 "iscsi_delete_auth_group", 
00:05:50.688 "iscsi_create_auth_group", 00:05:50.688 "iscsi_set_discovery_auth", 00:05:50.688 "iscsi_get_options", 00:05:50.688 "iscsi_target_node_request_logout", 00:05:50.688 "iscsi_target_node_set_redirect", 00:05:50.688 "iscsi_target_node_set_auth", 00:05:50.688 "iscsi_target_node_add_lun", 00:05:50.688 "iscsi_get_stats", 00:05:50.688 "iscsi_get_connections", 00:05:50.688 "iscsi_portal_group_set_auth", 00:05:50.688 "iscsi_start_portal_group", 00:05:50.688 "iscsi_delete_portal_group", 00:05:50.688 "iscsi_create_portal_group", 00:05:50.688 "iscsi_get_portal_groups", 00:05:50.688 "iscsi_delete_target_node", 00:05:50.688 "iscsi_target_node_remove_pg_ig_maps", 00:05:50.688 "iscsi_target_node_add_pg_ig_maps", 00:05:50.688 "iscsi_create_target_node", 00:05:50.688 "iscsi_get_target_nodes", 00:05:50.688 "iscsi_delete_initiator_group", 00:05:50.688 "iscsi_initiator_group_remove_initiators", 00:05:50.688 "iscsi_initiator_group_add_initiators", 00:05:50.689 "iscsi_create_initiator_group", 00:05:50.689 "iscsi_get_initiator_groups", 00:05:50.689 "nvmf_set_crdt", 00:05:50.689 "nvmf_set_config", 00:05:50.689 "nvmf_set_max_subsystems", 00:05:50.689 "nvmf_stop_mdns_prr", 00:05:50.689 "nvmf_publish_mdns_prr", 00:05:50.689 "nvmf_subsystem_get_listeners", 00:05:50.689 "nvmf_subsystem_get_qpairs", 00:05:50.689 "nvmf_subsystem_get_controllers", 00:05:50.689 "nvmf_get_stats", 00:05:50.689 "nvmf_get_transports", 00:05:50.689 "nvmf_create_transport", 00:05:50.689 "nvmf_get_targets", 00:05:50.689 "nvmf_delete_target", 00:05:50.689 "nvmf_create_target", 00:05:50.689 "nvmf_subsystem_allow_any_host", 00:05:50.689 "nvmf_subsystem_remove_host", 00:05:50.689 "nvmf_subsystem_add_host", 00:05:50.689 "nvmf_ns_remove_host", 00:05:50.689 "nvmf_ns_add_host", 00:05:50.689 "nvmf_subsystem_remove_ns", 00:05:50.689 "nvmf_subsystem_add_ns", 00:05:50.689 "nvmf_subsystem_listener_set_ana_state", 00:05:50.689 "nvmf_discovery_get_referrals", 00:05:50.689 "nvmf_discovery_remove_referral", 00:05:50.689 
"nvmf_discovery_add_referral", 00:05:50.689 "nvmf_subsystem_remove_listener", 00:05:50.689 "nvmf_subsystem_add_listener", 00:05:50.689 "nvmf_delete_subsystem", 00:05:50.689 "nvmf_create_subsystem", 00:05:50.689 "nvmf_get_subsystems", 00:05:50.689 "env_dpdk_get_mem_stats", 00:05:50.689 "nbd_get_disks", 00:05:50.689 "nbd_stop_disk", 00:05:50.689 "nbd_start_disk", 00:05:50.689 "ublk_recover_disk", 00:05:50.689 "ublk_get_disks", 00:05:50.689 "ublk_stop_disk", 00:05:50.689 "ublk_start_disk", 00:05:50.689 "ublk_destroy_target", 00:05:50.689 "ublk_create_target", 00:05:50.689 "virtio_blk_create_transport", 00:05:50.689 "virtio_blk_get_transports", 00:05:50.689 "vhost_controller_set_coalescing", 00:05:50.689 "vhost_get_controllers", 00:05:50.689 "vhost_delete_controller", 00:05:50.689 "vhost_create_blk_controller", 00:05:50.689 "vhost_scsi_controller_remove_target", 00:05:50.689 "vhost_scsi_controller_add_target", 00:05:50.689 "vhost_start_scsi_controller", 00:05:50.689 "vhost_create_scsi_controller", 00:05:50.689 "thread_set_cpumask", 00:05:50.689 "framework_get_scheduler", 00:05:50.689 "framework_set_scheduler", 00:05:50.689 "framework_get_reactors", 00:05:50.689 "thread_get_io_channels", 00:05:50.689 "thread_get_pollers", 00:05:50.689 "thread_get_stats", 00:05:50.689 "framework_monitor_context_switch", 00:05:50.689 "spdk_kill_instance", 00:05:50.689 "log_enable_timestamps", 00:05:50.689 "log_get_flags", 00:05:50.689 "log_clear_flag", 00:05:50.689 "log_set_flag", 00:05:50.689 "log_get_level", 00:05:50.689 "log_set_level", 00:05:50.689 "log_get_print_level", 00:05:50.689 "log_set_print_level", 00:05:50.689 "framework_enable_cpumask_locks", 00:05:50.689 "framework_disable_cpumask_locks", 00:05:50.689 "framework_wait_init", 00:05:50.689 "framework_start_init", 00:05:50.689 "scsi_get_devices", 00:05:50.689 "bdev_get_histogram", 00:05:50.689 "bdev_enable_histogram", 00:05:50.689 "bdev_set_qos_limit", 00:05:50.689 "bdev_set_qd_sampling_period", 00:05:50.689 "bdev_get_bdevs", 
00:05:50.689 "bdev_reset_iostat", 00:05:50.689 "bdev_get_iostat", 00:05:50.689 "bdev_examine", 00:05:50.689 "bdev_wait_for_examine", 00:05:50.689 "bdev_set_options", 00:05:50.689 "notify_get_notifications", 00:05:50.689 "notify_get_types", 00:05:50.689 "accel_get_stats", 00:05:50.689 "accel_set_options", 00:05:50.689 "accel_set_driver", 00:05:50.689 "accel_crypto_key_destroy", 00:05:50.689 "accel_crypto_keys_get", 00:05:50.689 "accel_crypto_key_create", 00:05:50.689 "accel_assign_opc", 00:05:50.689 "accel_get_module_info", 00:05:50.689 "accel_get_opc_assignments", 00:05:50.689 "vmd_rescan", 00:05:50.689 "vmd_remove_device", 00:05:50.689 "vmd_enable", 00:05:50.689 "sock_get_default_impl", 00:05:50.689 "sock_set_default_impl", 00:05:50.689 "sock_impl_set_options", 00:05:50.689 "sock_impl_get_options", 00:05:50.689 "iobuf_get_stats", 00:05:50.689 "iobuf_set_options", 00:05:50.689 "keyring_get_keys", 00:05:50.689 "framework_get_pci_devices", 00:05:50.689 "framework_get_config", 00:05:50.689 "framework_get_subsystems", 00:05:50.689 "vfu_tgt_set_base_path", 00:05:50.689 "trace_get_info", 00:05:50.689 "trace_get_tpoint_group_mask", 00:05:50.689 "trace_disable_tpoint_group", 00:05:50.689 "trace_enable_tpoint_group", 00:05:50.689 "trace_clear_tpoint_mask", 00:05:50.689 "trace_set_tpoint_mask", 00:05:50.689 "spdk_get_version", 00:05:50.689 "rpc_get_methods" 00:05:50.689 ] 00:05:50.689 11:54:39 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:50.689 11:54:39 spdkcli_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:50.689 11:54:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:50.689 11:54:40 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:50.689 11:54:40 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2031691 00:05:50.689 11:54:40 spdkcli_tcp -- common/autotest_common.sh@949 -- # '[' -z 2031691 ']' 00:05:50.689 11:54:40 spdkcli_tcp -- common/autotest_common.sh@953 -- # kill -0 2031691 00:05:50.689 
11:54:40 spdkcli_tcp -- common/autotest_common.sh@954 -- # uname 00:05:50.689 11:54:40 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:50.689 11:54:40 spdkcli_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2031691 00:05:50.689 11:54:40 spdkcli_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:50.689 11:54:40 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:50.689 11:54:40 spdkcli_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2031691' 00:05:50.689 killing process with pid 2031691 00:05:50.689 11:54:40 spdkcli_tcp -- common/autotest_common.sh@968 -- # kill 2031691 00:05:50.689 11:54:40 spdkcli_tcp -- common/autotest_common.sh@973 -- # wait 2031691 00:05:50.948 00:05:50.948 real 0m1.545s 00:05:50.948 user 0m2.786s 00:05:50.948 sys 0m0.510s 00:05:50.948 11:54:40 spdkcli_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:50.948 11:54:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:50.948 ************************************ 00:05:50.948 END TEST spdkcli_tcp 00:05:50.948 ************************************ 00:05:50.948 11:54:40 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:50.948 11:54:40 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:50.948 11:54:40 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:50.948 11:54:40 -- common/autotest_common.sh@10 -- # set +x 00:05:51.208 ************************************ 00:05:51.208 START TEST dpdk_mem_utility 00:05:51.208 ************************************ 00:05:51.208 11:54:40 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:51.208 * Looking for test storage... 
00:05:51.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:51.208 11:54:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:51.208 11:54:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2032068 00:05:51.208 11:54:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2032068 00:05:51.208 11:54:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.208 11:54:40 dpdk_mem_utility -- common/autotest_common.sh@830 -- # '[' -z 2032068 ']' 00:05:51.208 11:54:40 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.208 11:54:40 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:51.208 11:54:40 dpdk_mem_utility -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.208 11:54:40 dpdk_mem_utility -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:51.208 11:54:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:51.208 [2024-06-10 11:54:40.625878] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:05:51.208 [2024-06-10 11:54:40.625928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2032068 ] 00:05:51.208 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.208 [2024-06-10 11:54:40.694137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.467 [2024-06-10 11:54:40.764312] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.036 11:54:41 dpdk_mem_utility -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:52.036 11:54:41 dpdk_mem_utility -- common/autotest_common.sh@863 -- # return 0 00:05:52.036 11:54:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:52.036 11:54:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:52.036 11:54:41 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:52.036 11:54:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:52.036 { 00:05:52.036 "filename": "/tmp/spdk_mem_dump.txt" 00:05:52.036 } 00:05:52.036 11:54:41 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:52.036 11:54:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:52.036 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:52.036 1 heaps totaling size 814.000000 MiB 00:05:52.036 size: 814.000000 MiB heap id: 0 00:05:52.036 end heaps---------- 00:05:52.036 8 mempools totaling size 598.116089 MiB 00:05:52.036 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:52.036 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:52.036 size: 84.521057 MiB name: bdev_io_2032068 00:05:52.036 size: 51.011292 MiB name: evtpool_2032068 00:05:52.036 size: 50.003479 
MiB name: msgpool_2032068 00:05:52.036 size: 21.763794 MiB name: PDU_Pool 00:05:52.036 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:52.036 size: 0.026123 MiB name: Session_Pool 00:05:52.036 end mempools------- 00:05:52.036 6 memzones totaling size 4.142822 MiB 00:05:52.036 size: 1.000366 MiB name: RG_ring_0_2032068 00:05:52.036 size: 1.000366 MiB name: RG_ring_1_2032068 00:05:52.036 size: 1.000366 MiB name: RG_ring_4_2032068 00:05:52.036 size: 1.000366 MiB name: RG_ring_5_2032068 00:05:52.036 size: 0.125366 MiB name: RG_ring_2_2032068 00:05:52.036 size: 0.015991 MiB name: RG_ring_3_2032068 00:05:52.036 end memzones------- 00:05:52.036 11:54:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:52.036 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:52.036 list of free elements. size: 12.519348 MiB 00:05:52.036 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:52.036 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:52.036 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:52.036 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:52.036 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:52.036 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:52.036 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:52.036 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:52.036 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:52.036 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:52.036 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:52.036 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:52.036 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:52.036 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:52.036 element at 
address: 0x200003a00000 with size: 0.355530 MiB 00:05:52.036 list of standard malloc elements. size: 199.218079 MiB 00:05:52.036 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:52.036 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:52.036 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:52.036 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:52.036 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:52.036 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:52.036 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:52.036 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:52.036 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:52.036 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:52.036 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:52.036 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:52.036 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:52.036 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:52.036 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:52.036 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:52.036 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:52.036 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:52.036 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:52.036 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:52.036 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:52.036 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:52.036 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:52.036 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:52.036 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:52.036 element at address: 0x200003eff0c0 with size: 0.000183 MiB 
00:05:52.036 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:52.036 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:52.036 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:52.036 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:52.036 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:52.036 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:52.036 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:52.036 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:52.036 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:52.036 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:52.036 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:52.036 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:52.036 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:52.036 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:52.036 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:52.036 list of memzone associated elements. 
size: 602.262573 MiB
00:05:52.036 element at address: 0x20001aa95500 with size: 211.416748 MiB
00:05:52.036 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:52.036 element at address: 0x200027e6ffc0 with size: 157.562561 MiB
00:05:52.036 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:52.036 element at address: 0x2000139fab80 with size: 84.020630 MiB
00:05:52.036 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2032068_0
00:05:52.036 element at address: 0x2000009ff380 with size: 48.003052 MiB
00:05:52.036 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2032068_0
00:05:52.036 element at address: 0x200003fff380 with size: 48.003052 MiB
00:05:52.036 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2032068_0
00:05:52.036 element at address: 0x2000195be940 with size: 20.255554 MiB
00:05:52.036 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:52.036 element at address: 0x200031dfeb40 with size: 18.005066 MiB
00:05:52.036 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:52.036 element at address: 0x2000005ffe00 with size: 2.000488 MiB
00:05:52.036 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2032068
00:05:52.036 element at address: 0x200003bffe00 with size: 2.000488 MiB
00:05:52.036 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2032068
00:05:52.036 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:05:52.036 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2032068
00:05:52.036 element at address: 0x20000b2fde40 with size: 1.008118 MiB
00:05:52.036 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:52.036 element at address: 0x2000194bc800 with size: 1.008118 MiB
00:05:52.036 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:52.036 element at address: 0x2000070fde40 with size: 1.008118 MiB
00:05:52.036 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:52.036 element at address: 0x2000008fd240 with size: 1.008118 MiB
00:05:52.036 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:52.036 element at address: 0x200003eff180 with size: 1.000488 MiB
00:05:52.036 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2032068
00:05:52.036 element at address: 0x200003affc00 with size: 1.000488 MiB
00:05:52.036 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2032068
00:05:52.036 element at address: 0x2000138fa980 with size: 1.000488 MiB
00:05:52.036 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2032068
00:05:52.036 element at address: 0x200031cfe940 with size: 1.000488 MiB
00:05:52.036 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2032068
00:05:52.036 element at address: 0x200003a5b100 with size: 0.500488 MiB
00:05:52.036 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2032068
00:05:52.036 element at address: 0x20000b27db80 with size: 0.500488 MiB
00:05:52.036 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:52.036 element at address: 0x20000087cf80 with size: 0.500488 MiB
00:05:52.037 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:52.037 element at address: 0x20001947c540 with size: 0.250488 MiB
00:05:52.037 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:52.037 element at address: 0x200003adf880 with size: 0.125488 MiB
00:05:52.037 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2032068
00:05:52.037 element at address: 0x2000070f5b80 with size: 0.031738 MiB
00:05:52.037 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:52.037 element at address: 0x200027e69100 with size: 0.023743 MiB
00:05:52.037 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:52.037 element at address: 0x200003adb5c0 with size: 0.016113 MiB
00:05:52.037 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2032068
00:05:52.037 element at address: 0x200027e6f240 with size: 0.002441 MiB
00:05:52.037 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:52.037 element at address: 0x2000002d7980 with size: 0.000305 MiB
00:05:52.037 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2032068
00:05:52.037 element at address: 0x200003adb3c0 with size: 0.000305 MiB
00:05:52.037 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2032068
00:05:52.037 element at address: 0x200027e6fd00 with size: 0.000305 MiB
00:05:52.037 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:52.037 11:54:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:52.037 11:54:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2032068
00:05:52.037 11:54:41 dpdk_mem_utility -- common/autotest_common.sh@949 -- # '[' -z 2032068 ']'
00:05:52.037 11:54:41 dpdk_mem_utility -- common/autotest_common.sh@953 -- # kill -0 2032068
00:05:52.037 11:54:41 dpdk_mem_utility -- common/autotest_common.sh@954 -- # uname
00:05:52.037 11:54:41 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:05:52.037 11:54:41 dpdk_mem_utility -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2032068
00:05:52.296 11:54:41 dpdk_mem_utility -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:05:52.296 11:54:41 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:05:52.296 11:54:41 dpdk_mem_utility -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2032068'
killing process with pid 2032068
11:54:41 dpdk_mem_utility -- common/autotest_common.sh@968 -- # kill 2032068
11:54:41 dpdk_mem_utility -- common/autotest_common.sh@973 -- # wait 2032068
00:05:52.556
00:05:52.556 real 0m1.405s
00:05:52.556 user 0m1.445s
00:05:52.556 sys 0m0.430s
00:05:52.556 11:54:41 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:52.556 11:54:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:52.556 ************************************
00:05:52.556 END TEST dpdk_mem_utility
00:05:52.556 ************************************
00:05:52.556 11:54:41 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:05:52.556 11:54:41 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:52.556 11:54:41 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:52.556 11:54:41 -- common/autotest_common.sh@10 -- # set +x
00:05:52.556 ************************************
00:05:52.556 START TEST event
00:05:52.556 ************************************
00:05:52.556 11:54:41 event -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:05:52.556 * Looking for test storage...
00:05:52.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:05:52.556 11:54:42 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:05:52.556 11:54:42 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:52.556 11:54:42 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:52.556 11:54:42 event -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']'
00:05:52.556 11:54:42 event -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:52.556 11:54:42 event -- common/autotest_common.sh@10 -- # set +x
00:05:52.818 ************************************
00:05:52.818 START TEST event_perf
00:05:52.818 ************************************
00:05:52.818 11:54:42 event.event_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:52.818 Running I/O for 1 seconds...[2024-06-10 11:54:42.100752] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization...
00:05:52.818 [2024-06-10 11:54:42.100799] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2032384 ]
00:05:52.818 EAL: No free 2048 kB hugepages reported on node 1
00:05:52.818 [2024-06-10 11:54:42.170985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:52.818 [2024-06-10 11:54:42.243915] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:05:52.818 [2024-06-10 11:54:42.244012] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:05:52.818 [2024-06-10 11:54:42.244097] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3
00:05:52.818 [2024-06-10 11:54:42.244099] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:05:54.198 Running I/O for 1 seconds...
00:05:54.198 lcore 0: 209633
00:05:54.198 lcore 1: 209634
00:05:54.198 lcore 2: 209633
00:05:54.198 lcore 3: 209633
00:05:54.198 done.
00:05:54.198
00:05:54.198 real 0m1.222s
00:05:54.198 user 0m4.132s
00:05:54.198 sys 0m0.088s
00:05:54.198 11:54:43 event.event_perf -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:54.198 11:54:43 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:05:54.198 ************************************
00:05:54.198 END TEST event_perf
00:05:54.198 ************************************
00:05:54.198 11:54:43 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:54.198 11:54:43 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']'
00:05:54.198 11:54:43 event -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:54.198 11:54:43 event -- common/autotest_common.sh@10 -- # set +x
00:05:54.198 ************************************
00:05:54.198 START TEST event_reactor
00:05:54.198 ************************************
00:05:54.198 11:54:43 event.event_reactor -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:54.198 [2024-06-10 11:54:43.414695] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization...
00:05:54.198 [2024-06-10 11:54:43.414763] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2032547 ]
00:05:54.198 EAL: No free 2048 kB hugepages reported on node 1
00:05:54.198 [2024-06-10 11:54:43.486873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:54.198 [2024-06-10 11:54:43.555979] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:05:55.135 test_start
00:05:55.135 oneshot
00:05:55.135 tick 100
00:05:55.135 tick 100
00:05:55.135 tick 250
00:05:55.135 tick 100
00:05:55.135 tick 100
00:05:55.135 tick 250
00:05:55.135 tick 100
00:05:55.135 tick 500
00:05:55.135 tick 100
00:05:55.135 tick 100
00:05:55.135 tick 250
00:05:55.135 tick 100
00:05:55.135 tick 100
00:05:55.135 test_end
00:05:55.135
00:05:55.135 real 0m1.231s
00:05:55.135 user 0m1.136s
00:05:55.135 sys 0m0.091s
00:05:55.135 11:54:44 event.event_reactor -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:55.135 11:54:44 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:05:55.135 ************************************
00:05:55.135 END TEST event_reactor
00:05:55.135 ************************************
00:05:55.395 11:54:44 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:55.395 11:54:44 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']'
00:05:55.395 11:54:44 event -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:55.395 11:54:44 event -- common/autotest_common.sh@10 -- # set +x
00:05:55.395 ************************************
00:05:55.395 START TEST event_reactor_perf
00:05:55.395 ************************************
00:05:55.395 11:54:44 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:55.395 [2024-06-10 11:54:44.727586] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization...
00:05:55.395 [2024-06-10 11:54:44.727656] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2032739 ]
00:05:55.395 EAL: No free 2048 kB hugepages reported on node 1
00:05:55.395 [2024-06-10 11:54:44.801768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:55.395 [2024-06-10 11:54:44.873318] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:05:56.774 test_start
00:05:56.774 test_end
00:05:56.774 Performance: 526885 events per second
00:05:56.774
00:05:56.774 real 0m1.235s
00:05:56.774 user 0m1.142s
00:05:56.774 sys 0m0.090s
00:05:56.774 11:54:45 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:56.774 11:54:45 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:05:56.774 ************************************
00:05:56.774 END TEST event_reactor_perf
00:05:56.774 ************************************
00:05:56.775 11:54:45 event -- event/event.sh@49 -- # uname -s
00:05:56.775 11:54:45 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:05:56.775 11:54:45 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:56.775 11:54:45 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:56.775 11:54:45 event -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:56.775 11:54:45 event -- common/autotest_common.sh@10 -- # set +x
00:05:56.775 ************************************
00:05:56.775 START TEST event_scheduler
00:05:56.775 ************************************
00:05:56.775 11:54:46 event.event_scheduler -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:56.775 * Looking for test storage...
00:05:56.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:05:56.775 11:54:46 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:05:56.775 11:54:46 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2033055
00:05:56.775 11:54:46 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:05:56.775 11:54:46 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:05:56.775 11:54:46 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2033055
00:05:56.775 11:54:46 event.event_scheduler -- common/autotest_common.sh@830 -- # '[' -z 2033055 ']'
00:05:56.775 11:54:46 event.event_scheduler -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:56.775 11:54:46 event.event_scheduler -- common/autotest_common.sh@835 -- # local max_retries=100
00:05:56.775 11:54:46 event.event_scheduler -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:56.775 11:54:46 event.event_scheduler -- common/autotest_common.sh@839 -- # xtrace_disable
00:05:56.775 11:54:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:56.775 [2024-06-10 11:54:46.182408] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization...
00:05:56.775 [2024-06-10 11:54:46.182463] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2033055 ]
00:05:56.775 EAL: No free 2048 kB hugepages reported on node 1
00:05:56.775 [2024-06-10 11:54:46.251416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:57.034 [2024-06-10 11:54:46.331034] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:05:57.034 [2024-06-10 11:54:46.331116] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:05:57.034 [2024-06-10 11:54:46.331199] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3
00:05:57.034 [2024-06-10 11:54:46.331201] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:05:57.603 11:54:46 event.event_scheduler -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:05:57.603 11:54:46 event.event_scheduler -- common/autotest_common.sh@863 -- # return 0
00:05:57.603 11:54:46 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:05:57.603 11:54:46 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:57.603 11:54:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:57.603 POWER: Env isn't set yet!
00:05:57.603 POWER: Attempting to initialise ACPI cpufreq power management...
00:05:57.603 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:57.603 POWER: Cannot set governor of lcore 0 to userspace
00:05:57.603 POWER: Attempting to initialise PSTAT power management...
00:05:57.603 POWER: Power management governor of lcore 0 has been set to 'performance' successfully
00:05:57.603 POWER: Initialized successfully for lcore 0 power management
00:05:57.603 POWER: Power management governor of lcore 1 has been set to 'performance' successfully
00:05:57.603 POWER: Initialized successfully for lcore 1 power management
00:05:57.603 POWER: Power management governor of lcore 2 has been set to 'performance' successfully
00:05:57.603 POWER: Initialized successfully for lcore 2 power management
00:05:57.603 POWER: Power management governor of lcore 3 has been set to 'performance' successfully
00:05:57.603 POWER: Initialized successfully for lcore 3 power management
00:05:57.603 [2024-06-10 11:54:47.043636] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:05:57.603 [2024-06-10 11:54:47.043652] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:05:57.603 [2024-06-10 11:54:47.043662] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:05:57.603 11:54:47 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:57.603 11:54:47 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:05:57.603 11:54:47 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:57.603 11:54:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:57.603 [2024-06-10 11:54:47.115965] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:05:57.603 11:54:47 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:57.603 11:54:47 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:05:57.603 11:54:47 event.event_scheduler -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:57.603 11:54:47 event.event_scheduler -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:57.603 11:54:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:57.863 ************************************
00:05:57.863 START TEST scheduler_create_thread
00:05:57.863 ************************************
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # scheduler_create_thread
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:57.863 2
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:57.863 3
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:57.863 4
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:57.863 5
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:57.863 6
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:57.863 7
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:57.863 8
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:57.863 9
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:57.863 10
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:57.863 11:54:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:58.801 11:54:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:58.801 11:54:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:05:58.801 11:54:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:58.801 11:54:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:00.179 11:54:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:00.179 11:54:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:06:00.179 11:54:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:06:00.179 11:54:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:00.179 11:54:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:01.116 11:54:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:01.116
00:06:01.116 real 0m3.383s
00:06:01.116 user 0m0.021s
00:06:01.116 sys 0m0.010s
00:06:01.116 11:54:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:01.116 11:54:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:01.116 ************************************
00:06:01.116 END TEST scheduler_create_thread
00:06:01.116 ************************************
00:06:01.116 11:54:50 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:06:01.116 11:54:50 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2033055
00:06:01.116 11:54:50 event.event_scheduler -- common/autotest_common.sh@949 -- # '[' -z 2033055 ']'
00:06:01.116 11:54:50 event.event_scheduler -- common/autotest_common.sh@953 -- # kill -0 2033055
00:06:01.116 11:54:50 event.event_scheduler -- common/autotest_common.sh@954 -- # uname
00:06:01.116 11:54:50 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:06:01.116 11:54:50 event.event_scheduler -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2033055
00:06:01.116 11:54:50 event.event_scheduler -- common/autotest_common.sh@955 -- # process_name=reactor_2
00:06:01.116 11:54:50 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']'
00:06:01.116 11:54:50 event.event_scheduler -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2033055'
killing process with pid 2033055
11:54:50 event.event_scheduler -- common/autotest_common.sh@968 -- # kill 2033055
11:54:50 event.event_scheduler -- common/autotest_common.sh@973 -- # wait 2033055
00:06:01.685 [2024-06-10 11:54:50.920171] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:06:01.685 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully
00:06:01.685 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original
00:06:01.685 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully
00:06:01.685 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original
00:06:01.685 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully
00:06:01.685 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original
00:06:01.685 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully
00:06:01.685 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original
00:06:01.685
00:06:01.685 real 0m5.123s
00:06:01.685 user 0m10.473s
00:06:01.685 sys 0m0.427s
00:06:01.685 11:54:51 event.event_scheduler -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:01.685 11:54:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:01.685 ************************************
00:06:01.685 END TEST event_scheduler
00:06:01.685 ************************************
00:06:01.685 11:54:51 event -- event/event.sh@51 -- # modprobe -n nbd
00:06:01.685 11:54:51 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:06:01.685 11:54:51 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:06:01.685 11:54:51 event -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:01.685 11:54:51 event -- common/autotest_common.sh@10 -- # set +x
00:06:01.956 ************************************
00:06:01.956 START TEST app_repeat
00:06:01.956 ************************************
00:06:01.956 11:54:51 event.app_repeat -- common/autotest_common.sh@1124 -- # app_repeat_test
00:06:01.956 11:54:51 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:01.956 11:54:51 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:01.956 11:54:51 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:06:01.956 11:54:51 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:01.956 11:54:51 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:06:01.956 11:54:51 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:06:01.956 11:54:51 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:06:01.956 11:54:51 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2034055
00:06:01.956 11:54:51 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:06:01.956 11:54:51 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:06:01.956 11:54:51 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2034055'
Process app_repeat pid: 2034055
00:06:01.956 11:54:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:01.956 11:54:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
spdk_app_start Round 0
00:06:01.956 11:54:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2034055 /var/tmp/spdk-nbd.sock
00:06:01.956 11:54:51 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 2034055 ']'
00:06:01.956 11:54:51 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:01.956 11:54:51 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100
00:06:01.956 11:54:51 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:01.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:01.956 11:54:51 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable
00:06:01.956 11:54:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:01.956 [2024-06-10 11:54:51.265837] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization...
00:06:01.956 [2024-06-10 11:54:51.265896] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2034055 ]
00:06:01.956 EAL: No free 2048 kB hugepages reported on node 1
00:06:01.956 [2024-06-10 11:54:51.336883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:01.956 [2024-06-10 11:54:51.416365] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:06:01.956 [2024-06-10 11:54:51.416370] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:06:02.892 11:54:52 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:06:02.892 11:54:52 event.app_repeat -- common/autotest_common.sh@863 -- # return 0
00:06:02.892 11:54:52 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:02.892 Malloc0
00:06:02.892 11:54:52 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:03.150 Malloc1
00:06:03.150 11:54:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:03.150 11:54:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:03.150 11:54:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:03.150 11:54:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:03.150 11:54:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:03.150 11:54:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:03.150 11:54:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:03.150 11:54:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:03.150 11:54:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:03.150 11:54:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:03.150 11:54:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:03.150 11:54:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:03.150 11:54:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:03.150 11:54:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:03.150 11:54:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:03.150 11:54:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:03.150 /dev/nbd0
00:06:03.150 11:54:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:03.150 11:54:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:03.150 11:54:52 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0
00:06:03.150 11:54:52 event.app_repeat -- common/autotest_common.sh@868 -- # local i
00:06:03.150 11:54:52 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 ))
00:06:03.150 11:54:52 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 ))
00:06:03.150 11:54:52 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions
00:06:03.150 11:54:52 event.app_repeat -- common/autotest_common.sh@872 -- # break
00:06:03.150 11:54:52 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 ))
00:06:03.150 11:54:52 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 ))
00:06:03.150 11:54:52 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:03.150 1+0 records in
00:06:03.150 1+0 records out
00:06:03.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025174 s, 16.3 MB/s
00:06:03.150 11:54:52 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:03.150 11:54:52 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096
00:06:03.150 11:54:52 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:03.150 11:54:52 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']'
00:06:03.150 11:54:52 event.app_repeat -- common/autotest_common.sh@888 -- # return 0
00:06:03.150 11:54:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:03.150 11:54:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:03.150 11:54:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:03.409 /dev/nbd1
00:06:03.409 11:54:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
11:54:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
11:54:52 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1
11:54:52 event.app_repeat -- common/autotest_common.sh@868 -- # local i
11:54:52 event.app_repeat -- common/autotest_common.sh@870 -- # (( i =
1 )) 00:06:03.409 11:54:52 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:03.409 11:54:52 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:06:03.409 11:54:52 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:03.409 11:54:52 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:03.409 11:54:52 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:03.409 11:54:52 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.409 1+0 records in 00:06:03.409 1+0 records out 00:06:03.409 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000143372 s, 28.6 MB/s 00:06:03.409 11:54:52 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.409 11:54:52 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:03.409 11:54:52 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.409 11:54:52 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:03.409 11:54:52 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:03.409 11:54:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.409 11:54:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.409 11:54:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.409 11:54:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.409 11:54:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:03.668 { 00:06:03.668 "nbd_device": "/dev/nbd0", 00:06:03.668 "bdev_name": "Malloc0" 00:06:03.668 }, 00:06:03.668 { 00:06:03.668 "nbd_device": "/dev/nbd1", 00:06:03.668 "bdev_name": "Malloc1" 00:06:03.668 } 00:06:03.668 ]' 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:03.668 { 00:06:03.668 "nbd_device": "/dev/nbd0", 00:06:03.668 "bdev_name": "Malloc0" 00:06:03.668 }, 00:06:03.668 { 00:06:03.668 "nbd_device": "/dev/nbd1", 00:06:03.668 "bdev_name": "Malloc1" 00:06:03.668 } 00:06:03.668 ]' 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:03.668 /dev/nbd1' 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:03.668 /dev/nbd1' 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd 
if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:03.668 256+0 records in 00:06:03.668 256+0 records out 00:06:03.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114457 s, 91.6 MB/s 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:03.668 256+0 records in 00:06:03.668 256+0 records out 00:06:03.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198091 s, 52.9 MB/s 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:03.668 256+0 records in 00:06:03.668 256+0 records out 00:06:03.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0165999 s, 63.2 MB/s 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # 
cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.668 11:54:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:03.927 11:54:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:03.927 11:54:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:03.927 11:54:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:03.927 11:54:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.927 11:54:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.927 11:54:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:03.927 11:54:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:03.927 11:54:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.927 11:54:53 
event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.927 11:54:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:04.186 11:54:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:04.186 11:54:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:04.186 11:54:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:04.186 11:54:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.186 11:54:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.186 11:54:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:04.186 11:54:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:04.186 11:54:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.186 11:54:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.186 11:54:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.186 11:54:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.516 11:54:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:04.516 11:54:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:04.516 11:54:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.516 11:54:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:04.516 11:54:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.516 11:54:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:04.516 11:54:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:04.516 11:54:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 
00:06:04.516 11:54:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:04.516 11:54:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:04.516 11:54:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:04.516 11:54:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:04.516 11:54:53 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:04.516 11:54:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:04.787 [2024-06-10 11:54:54.158361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.787 [2024-06-10 11:54:54.221770] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.787 [2024-06-10 11:54:54.221772] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.787 [2024-06-10 11:54:54.262316] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:04.787 [2024-06-10 11:54:54.262358] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:08.078 11:54:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:08.078 11:54:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:08.078 spdk_app_start Round 1 00:06:08.078 11:54:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2034055 /var/tmp/spdk-nbd.sock 00:06:08.078 11:54:56 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 2034055 ']' 00:06:08.078 11:54:56 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:08.078 11:54:56 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:08.078 11:54:56 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:08.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:08.078 11:54:56 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:08.078 11:54:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:08.078 11:54:57 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:08.078 11:54:57 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:06:08.078 11:54:57 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.078 Malloc0 00:06:08.078 11:54:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.078 Malloc1 00:06:08.078 11:54:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.078 11:54:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.078 11:54:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.078 11:54:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:08.078 11:54:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.078 11:54:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:08.078 11:54:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.078 11:54:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.078 11:54:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.078 11:54:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:08.078 11:54:57 event.app_repeat -- bdev/nbd_common.sh@11 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.078 11:54:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:08.078 11:54:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:08.078 11:54:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:08.078 11:54:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.078 11:54:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:08.337 /dev/nbd0 00:06:08.337 11:54:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:08.337 11:54:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:08.337 11:54:57 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:06:08.337 11:54:57 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:08.337 11:54:57 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:08.337 11:54:57 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:08.337 11:54:57 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:06:08.337 11:54:57 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:08.338 11:54:57 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:08.338 11:54:57 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:08.338 11:54:57 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.338 1+0 records in 00:06:08.338 1+0 records out 00:06:08.338 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212044 s, 19.3 MB/s 00:06:08.338 11:54:57 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.338 11:54:57 
event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:08.338 11:54:57 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.338 11:54:57 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:08.338 11:54:57 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:08.338 11:54:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.338 11:54:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.338 11:54:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:08.597 /dev/nbd1 00:06:08.597 11:54:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:08.597 11:54:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:08.597 11:54:57 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:06:08.597 11:54:57 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:08.597 11:54:57 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:08.597 11:54:57 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:08.597 11:54:57 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:06:08.597 11:54:57 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:08.597 11:54:57 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:08.597 11:54:57 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:08.597 11:54:57 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.597 1+0 records in 00:06:08.597 1+0 records out 00:06:08.597 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258476 s, 
15.8 MB/s 00:06:08.597 11:54:57 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.597 11:54:57 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:08.597 11:54:57 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.597 11:54:57 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:08.597 11:54:57 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:08.597 11:54:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.597 11:54:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.597 11:54:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.597 11:54:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.597 11:54:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.597 11:54:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:08.597 { 00:06:08.597 "nbd_device": "/dev/nbd0", 00:06:08.597 "bdev_name": "Malloc0" 00:06:08.597 }, 00:06:08.597 { 00:06:08.597 "nbd_device": "/dev/nbd1", 00:06:08.597 "bdev_name": "Malloc1" 00:06:08.597 } 00:06:08.597 ]' 00:06:08.597 11:54:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:08.597 { 00:06:08.597 "nbd_device": "/dev/nbd0", 00:06:08.597 "bdev_name": "Malloc0" 00:06:08.597 }, 00:06:08.597 { 00:06:08.597 "nbd_device": "/dev/nbd1", 00:06:08.597 "bdev_name": "Malloc1" 00:06:08.597 } 00:06:08.597 ]' 00:06:08.597 11:54:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.856 11:54:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:08.856 /dev/nbd1' 00:06:08.856 11:54:58 
event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:08.856 /dev/nbd1' 00:06:08.856 11:54:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.856 11:54:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:08.856 11:54:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:08.856 11:54:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:08.856 11:54:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:08.856 11:54:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:08.856 11:54:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.856 11:54:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.856 11:54:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:08.856 11:54:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:08.856 11:54:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:08.856 11:54:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:08.856 256+0 records in 00:06:08.856 256+0 records out 00:06:08.856 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114978 s, 91.2 MB/s 00:06:08.856 11:54:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.856 11:54:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:08.856 256+0 records in 00:06:08.856 256+0 records out 00:06:08.856 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0171083 s, 61.3 MB/s 00:06:08.856 11:54:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.856 11:54:58 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:08.856 256+0 records in 00:06:08.856 256+0 records out 00:06:08.856 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211729 s, 49.5 MB/s 00:06:08.856 11:54:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:08.856 11:54:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.856 11:54:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.856 11:54:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:08.857 11:54:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:08.857 11:54:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:08.857 11:54:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:08.857 11:54:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.857 11:54:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:08.857 11:54:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.857 11:54:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:08.857 11:54:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:08.857 11:54:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:08.857 11:54:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.857 11:54:58 event.app_repeat -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.857 11:54:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:08.857 11:54:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:08.857 11:54:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.857 11:54:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:09.116 11:54:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:09.116 11:54:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:09.116 11:54:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:09.116 11:54:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.116 11:54:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.116 11:54:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:09.116 11:54:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:09.116 11:54:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.116 11:54:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.116 11:54:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:09.116 11:54:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:09.116 11:54:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:09.116 11:54:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:09.116 11:54:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.116 11:54:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.116 11:54:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:06:09.116 11:54:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:09.116 11:54:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.116 11:54:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.116 11:54:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.116 11:54:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.375 11:54:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:09.375 11:54:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:09.375 11:54:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.375 11:54:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:09.375 11:54:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.375 11:54:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:09.375 11:54:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:09.375 11:54:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:09.375 11:54:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:09.375 11:54:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:09.375 11:54:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:09.375 11:54:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:09.375 11:54:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:09.634 11:54:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:09.894 [2024-06-10 11:54:59.220509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.894 [2024-06-10 11:54:59.282793] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 
00:06:09.894 [2024-06-10 11:54:59.282796] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.894 [2024-06-10 11:54:59.324601] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:09.894 [2024-06-10 11:54:59.324655] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:13.185 11:55:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:13.185 11:55:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:13.185 spdk_app_start Round 2 00:06:13.185 11:55:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2034055 /var/tmp/spdk-nbd.sock 00:06:13.185 11:55:02 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 2034055 ']' 00:06:13.185 11:55:02 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:13.185 11:55:02 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:13.185 11:55:02 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:13.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:13.185 11:55:02 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:13.185 11:55:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:13.185 11:55:02 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:13.185 11:55:02 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:06:13.186 11:55:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.186 Malloc0 00:06:13.186 11:55:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.186 Malloc1 00:06:13.186 11:55:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.186 11:55:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.186 11:55:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.186 11:55:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:13.186 11:55:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.186 11:55:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:13.186 11:55:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.186 11:55:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.186 11:55:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.186 11:55:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:13.186 11:55:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.186 11:55:02 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:13.186 11:55:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:13.186 11:55:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:13.186 11:55:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.186 11:55:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:13.444 /dev/nbd0 00:06:13.444 11:55:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:13.444 11:55:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:13.444 11:55:02 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:06:13.444 11:55:02 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:13.444 11:55:02 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:13.444 11:55:02 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:13.444 11:55:02 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:06:13.444 11:55:02 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:13.444 11:55:02 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:13.444 11:55:02 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:13.444 11:55:02 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.444 1+0 records in 00:06:13.444 1+0 records out 00:06:13.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223075 s, 18.4 MB/s 00:06:13.444 11:55:02 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:13.444 11:55:02 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:13.444 11:55:02 event.app_repeat -- 
common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:13.444 11:55:02 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:13.444 11:55:02 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:13.444 11:55:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.444 11:55:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.444 11:55:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:13.703 /dev/nbd1 00:06:13.703 11:55:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:13.703 11:55:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:13.703 11:55:02 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:06:13.703 11:55:02 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:13.703 11:55:02 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:13.703 11:55:02 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:13.703 11:55:02 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:06:13.703 11:55:03 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:13.703 11:55:03 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:13.703 11:55:03 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:13.703 11:55:03 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.703 1+0 records in 00:06:13.703 1+0 records out 00:06:13.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211755 s, 19.3 MB/s 00:06:13.703 11:55:03 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:13.703 11:55:03 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:13.703 11:55:03 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:13.703 11:55:03 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:13.703 11:55:03 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:13.703 11:55:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.703 11:55:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.703 11:55:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.703 11:55:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.703 11:55:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.703 11:55:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:13.703 { 00:06:13.703 "nbd_device": "/dev/nbd0", 00:06:13.703 "bdev_name": "Malloc0" 00:06:13.703 }, 00:06:13.703 { 00:06:13.703 "nbd_device": "/dev/nbd1", 00:06:13.703 "bdev_name": "Malloc1" 00:06:13.703 } 00:06:13.703 ]' 00:06:13.703 11:55:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:13.703 { 00:06:13.703 "nbd_device": "/dev/nbd0", 00:06:13.703 "bdev_name": "Malloc0" 00:06:13.703 }, 00:06:13.703 { 00:06:13.703 "nbd_device": "/dev/nbd1", 00:06:13.703 "bdev_name": "Malloc1" 00:06:13.703 } 00:06:13.703 ]' 00:06:13.703 11:55:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:13.963 /dev/nbd1' 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:13.963 /dev/nbd1' 00:06:13.963 
11:55:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:13.963 256+0 records in 00:06:13.963 256+0 records out 00:06:13.963 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105727 s, 99.2 MB/s 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:13.963 256+0 records in 00:06:13.963 256+0 records out 00:06:13.963 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0199862 s, 52.5 MB/s 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:13.963 256+0 records in 00:06:13.963 256+0 records out 00:06:13.963 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0172044 s, 60.9 MB/s 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.963 11:55:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:14.222 11:55:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:14.222 11:55:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:14.222 11:55:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:14.222 11:55:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.222 11:55:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.222 11:55:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:14.222 11:55:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.222 11:55:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.222 11:55:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.222 11:55:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:14.222 11:55:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:14.222 11:55:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:14.222 11:55:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:14.222 11:55:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.222 11:55:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.222 11:55:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:14.222 11:55:03 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:14.222 11:55:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.222 11:55:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.222 11:55:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.222 11:55:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.481 11:55:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:14.481 11:55:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:14.481 11:55:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.481 11:55:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:14.481 11:55:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.481 11:55:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:14.481 11:55:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:14.481 11:55:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:14.481 11:55:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:14.481 11:55:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:14.481 11:55:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:14.481 11:55:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:14.481 11:55:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:14.740 11:55:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:14.999 [2024-06-10 11:55:04.311585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.999 [2024-06-10 11:55:04.373676] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.999 [2024-06-10 11:55:04.373679] 
reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.999 [2024-06-10 11:55:04.414252] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:14.999 [2024-06-10 11:55:04.414293] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:18.300 11:55:07 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2034055 /var/tmp/spdk-nbd.sock 00:06:18.300 11:55:07 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 2034055 ']' 00:06:18.300 11:55:07 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:18.300 11:55:07 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:18.300 11:55:07 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:18.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:18.300 11:55:07 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:18.300 11:55:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.300 11:55:07 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:18.300 11:55:07 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:06:18.300 11:55:07 event.app_repeat -- event/event.sh@39 -- # killprocess 2034055 00:06:18.300 11:55:07 event.app_repeat -- common/autotest_common.sh@949 -- # '[' -z 2034055 ']' 00:06:18.300 11:55:07 event.app_repeat -- common/autotest_common.sh@953 -- # kill -0 2034055 00:06:18.300 11:55:07 event.app_repeat -- common/autotest_common.sh@954 -- # uname 00:06:18.300 11:55:07 event.app_repeat -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:18.300 11:55:07 event.app_repeat -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2034055 00:06:18.300 11:55:07 event.app_repeat -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:18.300 11:55:07 event.app_repeat -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:18.300 11:55:07 event.app_repeat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2034055' 00:06:18.300 killing process with pid 2034055 00:06:18.300 11:55:07 event.app_repeat -- common/autotest_common.sh@968 -- # kill 2034055 00:06:18.300 11:55:07 event.app_repeat -- common/autotest_common.sh@973 -- # wait 2034055 00:06:18.300 spdk_app_start is called in Round 0. 00:06:18.300 Shutdown signal received, stop current app iteration 00:06:18.300 Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 reinitialization... 00:06:18.300 spdk_app_start is called in Round 1. 00:06:18.300 Shutdown signal received, stop current app iteration 00:06:18.300 Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 reinitialization... 00:06:18.300 spdk_app_start is called in Round 2. 
00:06:18.300 Shutdown signal received, stop current app iteration 00:06:18.300 Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 reinitialization... 00:06:18.300 spdk_app_start is called in Round 3. 00:06:18.300 Shutdown signal received, stop current app iteration 00:06:18.300 11:55:07 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:18.300 11:55:07 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:18.300 00:06:18.300 real 0m16.285s 00:06:18.300 user 0m34.634s 00:06:18.300 sys 0m2.978s 00:06:18.300 11:55:07 event.app_repeat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:18.301 11:55:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.301 ************************************ 00:06:18.301 END TEST app_repeat 00:06:18.301 ************************************ 00:06:18.301 11:55:07 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:18.301 11:55:07 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:18.301 11:55:07 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:18.301 11:55:07 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:18.301 11:55:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.301 ************************************ 00:06:18.301 START TEST cpu_locks 00:06:18.301 ************************************ 00:06:18.301 11:55:07 event.cpu_locks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:18.301 * Looking for test storage... 
00:06:18.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:18.301 11:55:07 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:18.301 11:55:07 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:18.301 11:55:07 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:18.301 11:55:07 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:18.301 11:55:07 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:18.301 11:55:07 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:18.301 11:55:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.301 ************************************ 00:06:18.301 START TEST default_locks 00:06:18.301 ************************************ 00:06:18.301 11:55:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # default_locks 00:06:18.301 11:55:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2037064 00:06:18.301 11:55:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2037064 00:06:18.301 11:55:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.301 11:55:07 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 2037064 ']' 00:06:18.301 11:55:07 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.301 11:55:07 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:18.301 11:55:07 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:18.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.301 11:55:07 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:18.301 11:55:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.301 [2024-06-10 11:55:07.788036] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:06:18.301 [2024-06-10 11:55:07.788082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2037064 ] 00:06:18.301 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.559 [2024-06-10 11:55:07.857738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.559 [2024-06-10 11:55:07.926960] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.124 11:55:08 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:19.124 11:55:08 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 0 00:06:19.124 11:55:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2037064 00:06:19.124 11:55:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2037064 00:06:19.124 11:55:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:19.689 lslocks: write error 00:06:19.689 11:55:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2037064 00:06:19.689 11:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@949 -- # '[' -z 2037064 ']' 00:06:19.689 11:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # kill -0 2037064 00:06:19.689 11:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # uname 00:06:19.689 11:55:09 event.cpu_locks.default_locks -- 
common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:19.689 11:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2037064 00:06:19.948 11:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:19.948 11:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:19.948 11:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2037064' 00:06:19.948 killing process with pid 2037064 00:06:19.948 11:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # kill 2037064 00:06:19.948 11:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # wait 2037064 00:06:20.207 11:55:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2037064 00:06:20.207 11:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:06:20.207 11:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 2037064 00:06:20.207 11:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:20.207 11:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:20.207 11:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:20.207 11:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:20.207 11:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # waitforlisten 2037064 00:06:20.207 11:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 2037064 ']' 00:06:20.207 11:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.207 11:55:09 event.cpu_locks.default_locks -- 
common/autotest_common.sh@835 -- # local max_retries=100 00:06:20.207 11:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.207 11:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:20.207 11:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (2037064) - No such process 00:06:20.207 ERROR: process (pid: 2037064) is no longer running 00:06:20.207 11:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:20.207 11:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 1 00:06:20.207 11:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:06:20.207 11:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:20.207 11:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:20.207 11:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:20.207 11:55:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:20.207 11:55:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:20.207 11:55:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:20.207 11:55:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:20.207 00:06:20.207 real 0m1.836s 00:06:20.207 user 0m1.936s 00:06:20.207 sys 0m0.677s 00:06:20.207 11:55:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:20.207 11:55:09 event.cpu_locks.default_locks -- 
common/autotest_common.sh@10 -- # set +x 00:06:20.207 ************************************ 00:06:20.207 END TEST default_locks 00:06:20.207 ************************************ 00:06:20.207 11:55:09 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:20.207 11:55:09 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:20.207 11:55:09 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:20.207 11:55:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.207 ************************************ 00:06:20.207 START TEST default_locks_via_rpc 00:06:20.207 ************************************ 00:06:20.207 11:55:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # default_locks_via_rpc 00:06:20.207 11:55:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2037451 00:06:20.207 11:55:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2037451 00:06:20.207 11:55:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.207 11:55:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 2037451 ']' 00:06:20.207 11:55:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.207 11:55:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:20.207 11:55:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:20.207 11:55:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:20.207 11:55:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.207 [2024-06-10 11:55:09.702615] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:06:20.207 [2024-06-10 11:55:09.702673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2037451 ] 00:06:20.466 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.466 [2024-06-10 11:55:09.771657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.466 [2024-06-10 11:55:09.844591] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.032 11:55:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:21.032 11:55:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:21.032 11:55:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:21.032 11:55:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:21.032 11:55:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.032 11:55:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:21.032 11:55:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:21.032 11:55:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:21.032 11:55:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:21.032 11:55:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:21.032 11:55:10 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:21.032 11:55:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:21.032 11:55:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.032 11:55:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:21.032 11:55:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2037451 00:06:21.032 11:55:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2037451 00:06:21.032 11:55:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.599 11:55:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2037451 00:06:21.599 11:55:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@949 -- # '[' -z 2037451 ']' 00:06:21.599 11:55:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # kill -0 2037451 00:06:21.599 11:55:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # uname 00:06:21.599 11:55:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:21.599 11:55:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2037451 00:06:21.599 11:55:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:21.599 11:55:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:21.599 11:55:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2037451' 00:06:21.599 killing process with pid 2037451 00:06:21.599 11:55:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # kill 
2037451 00:06:21.599 11:55:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # wait 2037451 00:06:21.875 00:06:21.875 real 0m1.554s 00:06:21.875 user 0m1.611s 00:06:21.875 sys 0m0.529s 00:06:21.875 11:55:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:21.875 11:55:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.875 ************************************ 00:06:21.875 END TEST default_locks_via_rpc 00:06:21.875 ************************************ 00:06:21.875 11:55:11 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:21.875 11:55:11 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:21.875 11:55:11 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:21.875 11:55:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.875 ************************************ 00:06:21.875 START TEST non_locking_app_on_locked_coremask 00:06:21.875 ************************************ 00:06:21.875 11:55:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # non_locking_app_on_locked_coremask 00:06:21.875 11:55:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2037785 00:06:21.875 11:55:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2037785 /var/tmp/spdk.sock 00:06:21.875 11:55:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:21.875 11:55:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 2037785 ']' 00:06:21.875 11:55:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:21.875 11:55:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:21.875 11:55:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.875 11:55:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:21.875 11:55:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.875 [2024-06-10 11:55:11.334288] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:06:21.875 [2024-06-10 11:55:11.334337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2037785 ] 00:06:21.875 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.133 [2024-06-10 11:55:11.405216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.133 [2024-06-10 11:55:11.479596] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.701 11:55:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:22.701 11:55:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:22.701 11:55:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2037926 00:06:22.701 11:55:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2037926 /var/tmp/spdk2.sock 00:06:22.701 11:55:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:22.701 11:55:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 2037926 ']' 00:06:22.701 11:55:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.701 11:55:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:22.701 11:55:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.701 11:55:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:22.701 11:55:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.701 [2024-06-10 11:55:12.180111] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:06:22.701 [2024-06-10 11:55:12.180165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2037926 ] 00:06:22.701 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.960 [2024-06-10 11:55:12.276026] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:22.960 [2024-06-10 11:55:12.276048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.960 [2024-06-10 11:55:12.411787] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.526 11:55:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:23.526 11:55:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:23.526 11:55:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2037785 00:06:23.526 11:55:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2037785 00:06:23.526 11:55:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:24.902 lslocks: write error 00:06:24.902 11:55:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2037785 00:06:24.902 11:55:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 2037785 ']' 00:06:24.902 11:55:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 2037785 00:06:24.902 11:55:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:24.902 11:55:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:24.902 11:55:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2037785 00:06:24.902 11:55:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:24.902 11:55:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:24.902 11:55:14 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@967 -- # echo 'killing process with pid 2037785' 00:06:24.902 killing process with pid 2037785 00:06:24.902 11:55:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 2037785 00:06:24.902 11:55:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 2037785 00:06:25.470 11:55:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2037926 00:06:25.470 11:55:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 2037926 ']' 00:06:25.470 11:55:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 2037926 00:06:25.470 11:55:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:25.470 11:55:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:25.470 11:55:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2037926 00:06:25.470 11:55:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:25.470 11:55:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:25.470 11:55:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2037926' 00:06:25.470 killing process with pid 2037926 00:06:25.470 11:55:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 2037926 00:06:25.470 11:55:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 2037926 00:06:25.730 00:06:25.730 real 0m3.935s 00:06:25.730 user 0m4.189s 00:06:25.730 sys 0m1.410s 00:06:25.730 11:55:15 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:25.730 11:55:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.730 ************************************ 00:06:25.730 END TEST non_locking_app_on_locked_coremask 00:06:25.730 ************************************ 00:06:25.989 11:55:15 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:25.989 11:55:15 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:25.989 11:55:15 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:25.989 11:55:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.989 ************************************ 00:06:25.989 START TEST locking_app_on_unlocked_coremask 00:06:25.989 ************************************ 00:06:25.989 11:55:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_unlocked_coremask 00:06:25.989 11:55:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2038497 00:06:25.989 11:55:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2038497 /var/tmp/spdk.sock 00:06:25.989 11:55:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:25.989 11:55:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 2038497 ']' 00:06:25.989 11:55:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.989 11:55:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:25.989 11:55:15 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.989 11:55:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:25.989 11:55:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.989 [2024-06-10 11:55:15.356209] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:06:25.989 [2024-06-10 11:55:15.356252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2038497 ] 00:06:25.989 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.989 [2024-06-10 11:55:15.425078] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:25.989 [2024-06-10 11:55:15.425103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.989 [2024-06-10 11:55:15.498940] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.926 11:55:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:26.926 11:55:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:26.926 11:55:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2038745 00:06:26.926 11:55:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2038745 /var/tmp/spdk2.sock 00:06:26.926 11:55:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:26.926 11:55:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 2038745 ']' 00:06:26.926 11:55:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.926 11:55:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:26.926 11:55:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.926 11:55:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:26.926 11:55:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.926 [2024-06-10 11:55:16.197559] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:06:26.926 [2024-06-10 11:55:16.197610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2038745 ] 00:06:26.926 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.926 [2024-06-10 11:55:16.292166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.926 [2024-06-10 11:55:16.429003] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.494 11:55:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:27.494 11:55:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:27.494 11:55:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2038745 00:06:27.494 11:55:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2038745 00:06:27.494 11:55:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.870 lslocks: write error 00:06:28.870 11:55:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2038497 00:06:28.870 11:55:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 2038497 ']' 00:06:28.870 11:55:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 2038497 00:06:28.870 11:55:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:28.870 11:55:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:28.870 11:55:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2038497 00:06:28.870 11:55:18 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:28.870 11:55:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:28.870 11:55:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2038497' 00:06:28.870 killing process with pid 2038497 00:06:28.870 11:55:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 2038497 00:06:28.870 11:55:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 2038497 00:06:29.438 11:55:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2038745 00:06:29.438 11:55:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 2038745 ']' 00:06:29.438 11:55:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 2038745 00:06:29.438 11:55:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:29.438 11:55:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:29.438 11:55:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2038745 00:06:29.438 11:55:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:29.438 11:55:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:29.438 11:55:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2038745' 00:06:29.438 killing process with pid 2038745 00:06:29.438 11:55:18 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@968 -- # kill 2038745 00:06:29.438 11:55:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 2038745 00:06:29.703 00:06:29.703 real 0m3.917s 00:06:29.703 user 0m4.145s 00:06:29.703 sys 0m1.363s 00:06:29.703 11:55:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:29.703 11:55:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.703 ************************************ 00:06:29.703 END TEST locking_app_on_unlocked_coremask 00:06:29.703 ************************************ 00:06:30.010 11:55:19 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:30.010 11:55:19 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:30.010 11:55:19 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:30.010 11:55:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.010 ************************************ 00:06:30.010 START TEST locking_app_on_locked_coremask 00:06:30.010 ************************************ 00:06:30.010 11:55:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_locked_coremask 00:06:30.010 11:55:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2039326 00:06:30.010 11:55:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2039326 /var/tmp/spdk.sock 00:06:30.010 11:55:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:30.010 11:55:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 2039326 ']' 00:06:30.010 11:55:19 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.010 11:55:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:30.010 11:55:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.010 11:55:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:30.010 11:55:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.010 [2024-06-10 11:55:19.350912] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:06:30.010 [2024-06-10 11:55:19.350958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2039326 ] 00:06:30.010 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.010 [2024-06-10 11:55:19.421306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.010 [2024-06-10 11:55:19.490368] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.951 11:55:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:30.951 11:55:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:30.951 11:55:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:30.951 11:55:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2039343 00:06:30.951 11:55:20 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2039343 /var/tmp/spdk2.sock 00:06:30.951 11:55:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:06:30.951 11:55:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 2039343 /var/tmp/spdk2.sock 00:06:30.951 11:55:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:30.951 11:55:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:30.951 11:55:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:30.951 11:55:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:30.951 11:55:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 2039343 /var/tmp/spdk2.sock 00:06:30.951 11:55:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 2039343 ']' 00:06:30.951 11:55:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.951 11:55:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:30.951 11:55:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:30.951 11:55:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:30.951 11:55:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.951 [2024-06-10 11:55:20.190596] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:06:30.951 [2024-06-10 11:55:20.190643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2039343 ] 00:06:30.951 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.951 [2024-06-10 11:55:20.289373] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2039326 has claimed it. 00:06:30.951 [2024-06-10 11:55:20.289418] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:31.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (2039343) - No such process 00:06:31.519 ERROR: process (pid: 2039343) is no longer running 00:06:31.519 11:55:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:31.519 11:55:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 1 00:06:31.519 11:55:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:06:31.519 11:55:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:31.519 11:55:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:31.519 11:55:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:31.519 11:55:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # 
locks_exist 2039326 00:06:31.519 11:55:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2039326 00:06:31.519 11:55:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:32.087 lslocks: write error 00:06:32.087 11:55:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2039326 00:06:32.087 11:55:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 2039326 ']' 00:06:32.087 11:55:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 2039326 00:06:32.087 11:55:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:32.087 11:55:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:32.087 11:55:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2039326 00:06:32.087 11:55:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:32.087 11:55:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:32.087 11:55:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2039326' 00:06:32.087 killing process with pid 2039326 00:06:32.087 11:55:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 2039326 00:06:32.087 11:55:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 2039326 00:06:32.346 00:06:32.346 real 0m2.506s 00:06:32.346 user 0m2.732s 00:06:32.346 sys 0m0.838s 00:06:32.346 11:55:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:32.346 11:55:21 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.346 ************************************ 00:06:32.346 END TEST locking_app_on_locked_coremask 00:06:32.346 ************************************ 00:06:32.346 11:55:21 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:32.346 11:55:21 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:32.346 11:55:21 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:32.346 11:55:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.605 ************************************ 00:06:32.605 START TEST locking_overlapped_coremask 00:06:32.605 ************************************ 00:06:32.605 11:55:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask 00:06:32.605 11:55:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2039662 00:06:32.605 11:55:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2039662 /var/tmp/spdk.sock 00:06:32.606 11:55:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:32.606 11:55:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 2039662 ']' 00:06:32.606 11:55:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.606 11:55:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:32.606 11:55:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:32.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.606 11:55:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:32.606 11:55:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.606 [2024-06-10 11:55:21.935614] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:06:32.606 [2024-06-10 11:55:21.935661] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2039662 ] 00:06:32.606 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.606 [2024-06-10 11:55:22.005985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.606 [2024-06-10 11:55:22.077527] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.606 [2024-06-10 11:55:22.077624] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.606 [2024-06-10 11:55:22.077626] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.541 11:55:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:33.541 11:55:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:33.541 11:55:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:33.541 11:55:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2039909 00:06:33.541 11:55:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2039909 /var/tmp/spdk2.sock 00:06:33.541 11:55:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- 
# local es=0 00:06:33.541 11:55:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 2039909 /var/tmp/spdk2.sock 00:06:33.541 11:55:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:33.541 11:55:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:33.541 11:55:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:33.541 11:55:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:33.541 11:55:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 2039909 /var/tmp/spdk2.sock 00:06:33.541 11:55:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 2039909 ']' 00:06:33.541 11:55:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.541 11:55:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:33.541 11:55:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.541 11:55:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:33.541 11:55:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.541 [2024-06-10 11:55:22.771370] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:06:33.541 [2024-06-10 11:55:22.771419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2039909 ] 00:06:33.541 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.541 [2024-06-10 11:55:22.872277] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2039662 has claimed it. 00:06:33.541 [2024-06-10 11:55:22.872320] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:34.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (2039909) - No such process 00:06:34.108 ERROR: process (pid: 2039909) is no longer running 00:06:34.108 11:55:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:34.108 11:55:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 1 00:06:34.108 11:55:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:06:34.108 11:55:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:34.108 11:55:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:34.108 11:55:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:34.108 11:55:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:34.108 11:55:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:34.108 11:55:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:34.108 11:55:23 event.cpu_locks.locking_overlapped_coremask -- 
event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:34.109 11:55:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2039662 00:06:34.109 11:55:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@949 -- # '[' -z 2039662 ']' 00:06:34.109 11:55:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # kill -0 2039662 00:06:34.109 11:55:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # uname 00:06:34.109 11:55:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:34.109 11:55:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2039662 00:06:34.109 11:55:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:34.109 11:55:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:34.109 11:55:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2039662' 00:06:34.109 killing process with pid 2039662 00:06:34.109 11:55:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # kill 2039662 00:06:34.109 11:55:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # wait 2039662 00:06:34.367 00:06:34.367 real 0m1.894s 00:06:34.367 user 0m5.271s 00:06:34.367 sys 0m0.471s 00:06:34.367 11:55:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:34.367 11:55:23 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:06:34.367 ************************************ 00:06:34.367 END TEST locking_overlapped_coremask 00:06:34.367 ************************************ 00:06:34.367 11:55:23 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:34.367 11:55:23 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:34.367 11:55:23 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:34.367 11:55:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.367 ************************************ 00:06:34.367 START TEST locking_overlapped_coremask_via_rpc 00:06:34.367 ************************************ 00:06:34.367 11:55:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask_via_rpc 00:06:34.367 11:55:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2040181 00:06:34.367 11:55:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2040181 /var/tmp/spdk.sock 00:06:34.367 11:55:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:34.367 11:55:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 2040181 ']' 00:06:34.367 11:55:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.367 11:55:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:34.367 11:55:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:34.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.367 11:55:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:34.367 11:55:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.625 [2024-06-10 11:55:23.909547] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:06:34.625 [2024-06-10 11:55:23.909593] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2040181 ] 00:06:34.626 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.626 [2024-06-10 11:55:23.977923] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:34.626 [2024-06-10 11:55:23.977948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:34.626 [2024-06-10 11:55:24.048719] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.626 [2024-06-10 11:55:24.048815] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.626 [2024-06-10 11:55:24.048815] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.191 11:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:35.191 11:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:35.192 11:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:35.192 11:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2040221 00:06:35.192 11:55:24 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2040221 /var/tmp/spdk2.sock 00:06:35.192 11:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 2040221 ']' 00:06:35.192 11:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:35.192 11:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:35.192 11:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:35.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:35.192 11:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:35.192 11:55:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.450 [2024-06-10 11:55:24.737807] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:06:35.451 [2024-06-10 11:55:24.737857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2040221 ] 00:06:35.451 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.451 [2024-06-10 11:55:24.836208] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:35.451 [2024-06-10 11:55:24.836239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:35.709 [2024-06-10 11:55:24.979721] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:35.709 [2024-06-10 11:55:24.979835] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.709 [2024-06-10 11:55:24.979835] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:36.278 11:55:25 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.278 [2024-06-10 11:55:25.570549] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2040181 has claimed it. 00:06:36.278 request: 00:06:36.278 { 00:06:36.278 "method": "framework_enable_cpumask_locks", 00:06:36.278 "req_id": 1 00:06:36.278 } 00:06:36.278 Got JSON-RPC error response 00:06:36.278 response: 00:06:36.278 { 00:06:36.278 "code": -32603, 00:06:36.278 "message": "Failed to claim CPU core: 2" 00:06:36.278 } 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2040181 /var/tmp/spdk.sock 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 
-- # '[' -z 2040181 ']' 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2040221 /var/tmp/spdk2.sock 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 2040221 ']' 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:36.278 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.537 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:36.537 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:36.537 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:36.537 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:36.537 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:36.537 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:36.537 00:06:36.537 real 0m2.091s 00:06:36.537 user 0m0.790s 00:06:36.537 sys 0m0.222s 00:06:36.537 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:36.537 11:55:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.537 ************************************ 00:06:36.537 END TEST locking_overlapped_coremask_via_rpc 00:06:36.537 ************************************ 00:06:36.537 11:55:25 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:36.537 11:55:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2040181 ]] 00:06:36.537 11:55:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 2040181 00:06:36.537 11:55:25 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 2040181 ']' 00:06:36.537 11:55:25 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 2040181 00:06:36.537 11:55:25 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:06:36.537 11:55:25 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:36.537 11:55:25 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2040181 00:06:36.537 11:55:26 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:36.537 11:55:26 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:36.537 11:55:26 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2040181' 00:06:36.537 killing process with pid 2040181 00:06:36.537 11:55:26 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 2040181 00:06:36.537 11:55:26 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 2040181 00:06:37.106 11:55:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2040221 ]] 00:06:37.106 11:55:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2040221 00:06:37.106 11:55:26 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 2040221 ']' 00:06:37.106 11:55:26 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 2040221 00:06:37.106 11:55:26 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:06:37.106 11:55:26 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:37.106 11:55:26 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2040221 00:06:37.106 11:55:26 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:06:37.106 11:55:26 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:06:37.106 11:55:26 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 
2040221' 00:06:37.106 killing process with pid 2040221 00:06:37.106 11:55:26 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 2040221 00:06:37.106 11:55:26 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 2040221 00:06:37.365 11:55:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:37.365 11:55:26 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:37.365 11:55:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2040181 ]] 00:06:37.365 11:55:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2040181 00:06:37.365 11:55:26 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 2040181 ']' 00:06:37.365 11:55:26 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 2040181 00:06:37.365 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (2040181) - No such process 00:06:37.365 11:55:26 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 2040181 is not found' 00:06:37.365 Process with pid 2040181 is not found 00:06:37.365 11:55:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2040221 ]] 00:06:37.365 11:55:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2040221 00:06:37.365 11:55:26 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 2040221 ']' 00:06:37.365 11:55:26 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 2040221 00:06:37.365 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (2040221) - No such process 00:06:37.365 11:55:26 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 2040221 is not found' 00:06:37.365 Process with pid 2040221 is not found 00:06:37.365 11:55:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:37.365 00:06:37.365 real 0m19.147s 00:06:37.365 user 0m31.227s 00:06:37.365 sys 0m6.585s 00:06:37.365 11:55:26 event.cpu_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:37.365 
11:55:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.365 ************************************ 00:06:37.365 END TEST cpu_locks 00:06:37.365 ************************************ 00:06:37.365 00:06:37.365 real 0m44.824s 00:06:37.365 user 1m22.946s 00:06:37.365 sys 0m10.681s 00:06:37.365 11:55:26 event -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:37.365 11:55:26 event -- common/autotest_common.sh@10 -- # set +x 00:06:37.365 ************************************ 00:06:37.365 END TEST event 00:06:37.365 ************************************ 00:06:37.365 11:55:26 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:37.365 11:55:26 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:37.365 11:55:26 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:37.365 11:55:26 -- common/autotest_common.sh@10 -- # set +x 00:06:37.365 ************************************ 00:06:37.365 START TEST thread 00:06:37.365 ************************************ 00:06:37.365 11:55:26 thread -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:37.625 * Looking for test storage... 
00:06:37.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:37.625 11:55:26 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:37.625 11:55:26 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:37.625 11:55:26 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:37.625 11:55:26 thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.625 ************************************ 00:06:37.625 START TEST thread_poller_perf 00:06:37.625 ************************************ 00:06:37.625 11:55:27 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:37.625 [2024-06-10 11:55:27.028771] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:06:37.625 [2024-06-10 11:55:27.028853] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2040840 ] 00:06:37.625 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.625 [2024-06-10 11:55:27.099949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.884 [2024-06-10 11:55:27.169243] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.884 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:38.820 ====================================== 00:06:38.820 busy:2506531378 (cyc) 00:06:38.820 total_run_count: 434000 00:06:38.820 tsc_hz: 2500000000 (cyc) 00:06:38.820 ====================================== 00:06:38.820 poller_cost: 5775 (cyc), 2310 (nsec) 00:06:38.820 00:06:38.820 real 0m1.233s 00:06:38.820 user 0m1.139s 00:06:38.820 sys 0m0.090s 00:06:38.820 11:55:28 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:38.820 11:55:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:38.820 ************************************ 00:06:38.820 END TEST thread_poller_perf 00:06:38.820 ************************************ 00:06:38.820 11:55:28 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:38.820 11:55:28 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:38.820 11:55:28 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:38.820 11:55:28 thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.820 ************************************ 00:06:38.820 START TEST thread_poller_perf 00:06:38.820 ************************************ 00:06:38.820 11:55:28 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:38.820 [2024-06-10 11:55:28.324887] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:06:38.820 [2024-06-10 11:55:28.324935] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2041008 ] 00:06:39.079 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.079 [2024-06-10 11:55:28.393439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.079 [2024-06-10 11:55:28.462854] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.079 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:40.015 ====================================== 00:06:40.015 busy:2501715288 (cyc) 00:06:40.015 total_run_count: 5726000 00:06:40.015 tsc_hz: 2500000000 (cyc) 00:06:40.015 ====================================== 00:06:40.015 poller_cost: 436 (cyc), 174 (nsec) 00:06:40.015 00:06:40.015 real 0m1.218s 00:06:40.015 user 0m1.139s 00:06:40.015 sys 0m0.076s 00:06:40.015 11:55:29 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:40.015 11:55:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:40.015 ************************************ 00:06:40.015 END TEST thread_poller_perf 00:06:40.015 ************************************ 00:06:40.274 11:55:29 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:40.274 00:06:40.274 real 0m2.707s 00:06:40.274 user 0m2.386s 00:06:40.274 sys 0m0.330s 00:06:40.274 11:55:29 thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:40.274 11:55:29 thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.274 ************************************ 00:06:40.275 END TEST thread 00:06:40.275 ************************************ 00:06:40.275 11:55:29 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:40.275 11:55:29 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:40.275 
11:55:29 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:40.275 11:55:29 -- common/autotest_common.sh@10 -- # set +x 00:06:40.275 ************************************ 00:06:40.275 START TEST accel 00:06:40.275 ************************************ 00:06:40.275 11:55:29 accel -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:40.275 * Looking for test storage... 00:06:40.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:40.275 11:55:29 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:40.275 11:55:29 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:40.275 11:55:29 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:40.275 11:55:29 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2041297 00:06:40.275 11:55:29 accel -- accel/accel.sh@63 -- # waitforlisten 2041297 00:06:40.275 11:55:29 accel -- common/autotest_common.sh@830 -- # '[' -z 2041297 ']' 00:06:40.275 11:55:29 accel -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.275 11:55:29 accel -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:40.275 11:55:29 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:40.275 11:55:29 accel -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.275 11:55:29 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:40.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:40.275 11:55:29 accel -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:40.275 11:55:29 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.275 11:55:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.275 11:55:29 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.275 11:55:29 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.275 11:55:29 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.275 11:55:29 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.275 11:55:29 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:40.275 11:55:29 accel -- accel/accel.sh@41 -- # jq -r . 00:06:40.275 [2024-06-10 11:55:29.792881] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:06:40.275 [2024-06-10 11:55:29.792934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2041297 ] 00:06:40.534 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.534 [2024-06-10 11:55:29.863060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.534 [2024-06-10 11:55:29.932222] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.103 11:55:30 accel -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:41.103 11:55:30 accel -- common/autotest_common.sh@863 -- # return 0 00:06:41.103 11:55:30 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:41.103 11:55:30 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:41.103 11:55:30 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:41.103 11:55:30 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:41.103 11:55:30 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:41.103 11:55:30 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:41.103 11:55:30 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:41.103 11:55:30 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:41.103 11:55:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.103 11:55:30 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:41.103 11:55:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.103 11:55:30 accel -- accel/accel.sh@72 -- # IFS== 00:06:41.103 11:55:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:41.103 11:55:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.103 11:55:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.103 11:55:30 accel -- accel/accel.sh@72 -- # IFS== 00:06:41.103 11:55:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:41.103 11:55:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.103 11:55:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.103 11:55:30 accel -- accel/accel.sh@72 -- # IFS== 00:06:41.103 11:55:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:41.103 11:55:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.103 11:55:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.103 11:55:30 accel -- accel/accel.sh@72 -- # IFS== 00:06:41.103 11:55:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:41.362 11:55:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.363 11:55:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.363 11:55:30 accel -- accel/accel.sh@72 -- # IFS== 00:06:41.363 11:55:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:41.363 11:55:30 accel -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:06:41.363 11:55:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.363 11:55:30 accel -- accel/accel.sh@72 -- # IFS== 00:06:41.363 11:55:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:41.363 11:55:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.363 11:55:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.363 11:55:30 accel -- accel/accel.sh@72 -- # IFS== 00:06:41.363 11:55:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:41.363 11:55:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.363 11:55:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.363 11:55:30 accel -- accel/accel.sh@72 -- # IFS== 00:06:41.363 11:55:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:41.363 11:55:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.363 11:55:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.363 11:55:30 accel -- accel/accel.sh@72 -- # IFS== 00:06:41.363 11:55:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:41.363 11:55:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.363 11:55:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.363 11:55:30 accel -- accel/accel.sh@72 -- # IFS== 00:06:41.363 11:55:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:41.363 11:55:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.363 11:55:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.363 11:55:30 accel -- accel/accel.sh@72 -- # IFS== 00:06:41.363 11:55:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:41.363 11:55:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.363 11:55:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.363 11:55:30 accel -- accel/accel.sh@72 -- # 
IFS== 00:06:41.363 11:55:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:41.363 11:55:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.363 11:55:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.363 11:55:30 accel -- accel/accel.sh@72 -- # IFS== 00:06:41.363 11:55:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:41.363 11:55:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.363 11:55:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.363 11:55:30 accel -- accel/accel.sh@72 -- # IFS== 00:06:41.363 11:55:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:41.363 11:55:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.363 11:55:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.363 11:55:30 accel -- accel/accel.sh@72 -- # IFS== 00:06:41.363 11:55:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:41.363 11:55:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.363 11:55:30 accel -- accel/accel.sh@75 -- # killprocess 2041297 00:06:41.363 11:55:30 accel -- common/autotest_common.sh@949 -- # '[' -z 2041297 ']' 00:06:41.363 11:55:30 accel -- common/autotest_common.sh@953 -- # kill -0 2041297 00:06:41.363 11:55:30 accel -- common/autotest_common.sh@954 -- # uname 00:06:41.363 11:55:30 accel -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:41.363 11:55:30 accel -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2041297 00:06:41.363 11:55:30 accel -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:41.363 11:55:30 accel -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:41.363 11:55:30 accel -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2041297' 00:06:41.363 killing process with pid 2041297 00:06:41.363 11:55:30 accel -- common/autotest_common.sh@968 -- # kill 2041297 00:06:41.363 
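The long run of identical trace lines above is accel.sh's loop (lines 70-73 of the script) filling the `expected_opcs` associative array from `accel_get_opc_assignments` output, one `opcode=module` pair per iteration. A standalone sketch of that parsing pattern, with hypothetical sample data in place of the jq-flattened RPC output:

```shell
#!/usr/bin/env bash
# Illustrative sketch (not SPDK code): mimic accel.sh's loop that splits
# "opcode=module" pairs into the expected_opcs associative array.
# The real test builds exp_opcs from:
#   rpc accel_get_opc_assignments | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
declare -A expected_opcs

exp_opcs=("copy=software" "fill=software" "crc32c=software")

for opc_opt in "${exp_opcs[@]}"; do
    # IFS== sets IFS to '=' for this read only, splitting key from value
    IFS== read -r opc module <<< "$opc_opt"
    expected_opcs["$opc"]=$module
done

echo "${expected_opcs[crc32c]}"   # -> software
```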
11:55:30 accel -- common/autotest_common.sh@973 -- # wait 2041297 00:06:41.623 11:55:30 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:41.623 11:55:30 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:41.623 11:55:30 accel -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:41.623 11:55:30 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:41.623 11:55:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.623 11:55:31 accel.accel_help -- common/autotest_common.sh@1124 -- # accel_perf -h 00:06:41.623 11:55:31 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:41.623 11:55:31 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:41.623 11:55:31 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.623 11:55:31 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.623 11:55:31 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.623 11:55:31 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.623 11:55:31 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.623 11:55:31 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:41.623 11:55:31 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:41.623 11:55:31 accel.accel_help -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:41.623 11:55:31 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:41.623 11:55:31 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:41.623 11:55:31 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:41.623 11:55:31 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:41.623 11:55:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.623 ************************************ 00:06:41.623 START TEST accel_missing_filename 00:06:41.623 ************************************ 00:06:41.623 11:55:31 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress 00:06:41.623 11:55:31 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:06:41.623 11:55:31 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:41.623 11:55:31 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:41.623 11:55:31 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:41.623 11:55:31 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:41.623 11:55:31 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:41.623 11:55:31 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:06:41.623 11:55:31 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:41.623 11:55:31 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:41.623 11:55:31 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.623 11:55:31 
accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.623 11:55:31 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.623 11:55:31 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.623 11:55:31 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.623 11:55:31 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:41.623 11:55:31 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:41.883 [2024-06-10 11:55:31.148371] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:06:41.883 [2024-06-10 11:55:31.148430] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2041524 ] 00:06:41.883 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.883 [2024-06-10 11:55:31.219268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.883 [2024-06-10 11:55:31.288696] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.883 [2024-06-10 11:55:31.329933] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:41.883 [2024-06-10 11:55:31.390161] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:42.142 A filename is required. 
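The `run_test ... NOT accel_perf` pattern traced here inverts the wrapped command's exit status: the trace that follows shows es=234 folded to 106 (signal-death offset) and then normalized to 1 before `(( !es == 0 ))` declares success. A hedged sketch of such a wrapper (not the SPDK helper itself, which also validates the argument with `valid_exec_arg`):

```shell
#!/usr/bin/env bash
# Illustrative NOT-style wrapper: succeed only when the wrapped command
# fails, mirroring the es bookkeeping visible in the trace.
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=$(( es - 128 ))   # e.g. 234 -> 106 (killed by signal)
    (( es > 0 )) && es=1                   # normalize any failure to 1
    # arithmetic truth: nonzero es (command failed) makes this return 0
    (( !es == 0 ))
}

NOT false && echo "inverted failure ok"
NOT true  || echo "inverted success ok"
```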
00:06:42.142 11:55:31 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:06:42.143 11:55:31 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:42.143 11:55:31 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:06:42.143 11:55:31 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:06:42.143 11:55:31 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:06:42.143 11:55:31 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:42.143 00:06:42.143 real 0m0.343s 00:06:42.143 user 0m0.254s 00:06:42.143 sys 0m0.127s 00:06:42.143 11:55:31 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:42.143 11:55:31 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:42.143 ************************************ 00:06:42.143 END TEST accel_missing_filename 00:06:42.143 ************************************ 00:06:42.143 11:55:31 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:42.143 11:55:31 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:06:42.143 11:55:31 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:42.143 11:55:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.143 ************************************ 00:06:42.143 START TEST accel_compress_verify 00:06:42.143 ************************************ 00:06:42.143 11:55:31 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:42.143 11:55:31 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:06:42.143 11:55:31 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # 
valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:42.143 11:55:31 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:42.143 11:55:31 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:42.143 11:55:31 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:42.143 11:55:31 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:42.143 11:55:31 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:42.143 11:55:31 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:42.143 11:55:31 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:42.143 11:55:31 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.143 11:55:31 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.143 11:55:31 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.143 11:55:31 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.143 11:55:31 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.143 11:55:31 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:42.143 11:55:31 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:42.143 [2024-06-10 11:55:31.554617] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:06:42.143 [2024-06-10 11:55:31.554675] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2041734 ] 00:06:42.143 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.143 [2024-06-10 11:55:31.625130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.403 [2024-06-10 11:55:31.693535] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.403 [2024-06-10 11:55:31.734172] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:42.403 [2024-06-10 11:55:31.793741] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:42.403 00:06:42.403 Compression does not support the verify option, aborting. 00:06:42.403 11:55:31 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:06:42.403 11:55:31 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:42.403 11:55:31 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:06:42.403 11:55:31 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:06:42.403 11:55:31 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:06:42.403 11:55:31 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:42.403 00:06:42.403 real 0m0.338s 00:06:42.403 user 0m0.239s 00:06:42.403 sys 0m0.136s 00:06:42.403 11:55:31 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:42.403 11:55:31 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:42.403 ************************************ 00:06:42.403 END TEST accel_compress_verify 00:06:42.403 ************************************ 00:06:42.403 11:55:31 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:42.403 
11:55:31 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:42.403 11:55:31 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:42.403 11:55:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.663 ************************************ 00:06:42.663 START TEST accel_wrong_workload 00:06:42.663 ************************************ 00:06:42.663 11:55:31 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w foobar 00:06:42.663 11:55:31 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:06:42.663 11:55:31 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:42.663 11:55:31 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:42.663 11:55:31 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:42.663 11:55:31 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:42.663 11:55:31 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:42.663 11:55:31 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 00:06:42.663 11:55:31 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:42.663 11:55:31 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:42.663 11:55:31 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.663 11:55:31 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.663 11:55:31 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.663 11:55:31 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.663 11:55:31 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 
00:06:42.663 11:55:31 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:42.663 11:55:31 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:42.663 Unsupported workload type: foobar 00:06:42.663 [2024-06-10 11:55:31.967670] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:42.663 accel_perf options: 00:06:42.663 [-h help message] 00:06:42.663 [-q queue depth per core] 00:06:42.663 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:42.663 [-T number of threads per core 00:06:42.663 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:42.663 [-t time in seconds] 00:06:42.663 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:42.663 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:42.663 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:42.663 [-l for compress/decompress workloads, name of uncompressed input file 00:06:42.663 [-S for crc32c workload, use this seed value (default 0) 00:06:42.663 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:42.663 [-f for fill workload, use this BYTE value (default 255) 00:06:42.663 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:42.663 [-y verify result if this switch is on] 00:06:42.663 [-a tasks to allocate per core (default: same value as -q)] 00:06:42.663 Can be used to spread operations across a wider range of memory. 
00:06:42.663 11:55:31 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:06:42.663 11:55:31 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:42.663 11:55:31 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:42.663 11:55:31 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:42.663 00:06:42.663 real 0m0.035s 00:06:42.663 user 0m0.021s 00:06:42.663 sys 0m0.014s 00:06:42.663 11:55:31 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:42.663 11:55:31 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:42.663 ************************************ 00:06:42.663 END TEST accel_wrong_workload 00:06:42.663 ************************************ 00:06:42.663 Error: writing output failed: Broken pipe 00:06:42.663 11:55:32 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:42.663 11:55:32 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:06:42.663 11:55:32 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:42.663 11:55:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.663 ************************************ 00:06:42.663 START TEST accel_negative_buffers 00:06:42.663 ************************************ 00:06:42.663 11:55:32 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:42.663 11:55:32 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:06:42.663 11:55:32 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:42.663 11:55:32 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:42.663 11:55:32 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:42.663 11:55:32 
accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:42.663 11:55:32 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:42.663 11:55:32 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:06:42.663 11:55:32 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:42.663 11:55:32 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:42.663 11:55:32 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.663 11:55:32 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.663 11:55:32 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.663 11:55:32 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.663 11:55:32 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.663 11:55:32 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:42.663 11:55:32 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:42.663 -x option must be non-negative. 00:06:42.663 [2024-06-10 11:55:32.066124] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:42.663 accel_perf options: 00:06:42.663 [-h help message] 00:06:42.663 [-q queue depth per core] 00:06:42.663 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:42.663 [-T number of threads per core 00:06:42.663 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:06:42.663 [-t time in seconds] 00:06:42.663 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:42.663 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:42.663 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:42.663 [-l for compress/decompress workloads, name of uncompressed input file 00:06:42.663 [-S for crc32c workload, use this seed value (default 0) 00:06:42.663 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:42.663 [-f for fill workload, use this BYTE value (default 255) 00:06:42.663 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:42.664 [-y verify result if this switch is on] 00:06:42.664 [-a tasks to allocate per core (default: same value as -q)] 00:06:42.664 Can be used to spread operations across a wider range of memory. 
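Both negative tests above exercise accel_perf's argument validation (`foobar` workload, `-x -1` buffers) against the option list it prints. For illustration only, a hypothetical mini-parser (not SPDK's accel_perf) covering a few of those flags and the "minimum: 2" constraint on `-x`:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of accel_perf-style option handling via getopts:
# -q queue depth, -t seconds, -w workload, -S seed, -x xor buffers, -y verify.
parse_accel_args() {
    local OPTIND=1 opt queue=64 secs=1 workload=copy seed=0 xbufs=2 verify=0
    while getopts "q:t:w:S:x:y" opt; do
        case $opt in
            q) queue=$OPTARG ;;
            t) secs=$OPTARG ;;
            w) workload=$OPTARG ;;
            S) seed=$OPTARG ;;
            x) xbufs=$OPTARG ;;
            y) verify=1 ;;
            *) return 1 ;;
        esac
    done
    # mirrors "-x ... (default, minimum: 2)" from the help text above
    (( xbufs < 2 )) && return 1
    echo "workload=$workload time=${secs}s seed=$seed verify=$verify"
}

parse_accel_args -t 1 -w crc32c -S 32 -y
```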
00:06:42.664 11:55:32 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:06:42.664 11:55:32 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:42.664 11:55:32 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:42.664 11:55:32 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:42.664 00:06:42.664 real 0m0.036s 00:06:42.664 user 0m0.019s 00:06:42.664 sys 0m0.017s 00:06:42.664 11:55:32 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:42.664 11:55:32 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:42.664 ************************************ 00:06:42.664 END TEST accel_negative_buffers 00:06:42.664 ************************************ 00:06:42.664 Error: writing output failed: Broken pipe 00:06:42.664 11:55:32 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:42.664 11:55:32 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:42.664 11:55:32 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:42.664 11:55:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.664 ************************************ 00:06:42.664 START TEST accel_crc32c 00:06:42.664 ************************************ 00:06:42.664 11:55:32 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:42.664 11:55:32 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:42.664 11:55:32 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:42.664 11:55:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.664 11:55:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.664 11:55:32 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:42.664 11:55:32 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:42.664 11:55:32 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:42.664 11:55:32 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.664 11:55:32 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.664 11:55:32 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.664 11:55:32 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.664 11:55:32 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.664 11:55:32 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:42.664 11:55:32 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:42.664 [2024-06-10 11:55:32.177109] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:06:42.664 [2024-06-10 11:55:32.177181] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2041848 ] 00:06:42.923 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.923 [2024-06-10 11:55:32.248150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.923 [2024-06-10 11:55:32.315731] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.923 11:55:32 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.923 11:55:32 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.923 11:55:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.303 11:55:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.303 11:55:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.303 11:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.303 11:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.303 11:55:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.303 11:55:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.303 11:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.303 11:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.303 11:55:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.303 11:55:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.303 11:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.304 11:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.304 11:55:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.304 11:55:33 accel.accel_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:06:44.304 11:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.304 11:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.304 11:55:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.304 11:55:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.304 11:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.304 11:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.304 11:55:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.304 11:55:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.304 11:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.304 11:55:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.304 11:55:33 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.304 11:55:33 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:44.304 11:55:33 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.304 00:06:44.304 real 0m1.343s 00:06:44.304 user 0m1.220s 00:06:44.304 sys 0m0.138s 00:06:44.304 11:55:33 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:44.304 11:55:33 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:44.304 ************************************ 00:06:44.304 END TEST accel_crc32c 00:06:44.304 ************************************ 00:06:44.304 11:55:33 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:44.304 11:55:33 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:44.304 11:55:33 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:44.304 11:55:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.304 ************************************ 00:06:44.304 START TEST accel_crc32c_C2 00:06:44.304 ************************************ 00:06:44.304 
11:55:33 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:44.304 [2024-06-10 11:55:33.597084] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:06:44.304 [2024-06-10 11:55:33.597139] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2042129 ] 00:06:44.304 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.304 [2024-06-10 11:55:33.665953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.304 [2024-06-10 11:55:33.733368] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" 
in 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.304 
11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- 
accel/accel.sh@20 -- # val= 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.304 11:55:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.689 11:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.689 11:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.689 11:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.689 11:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.689 11:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.689 11:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.689 11:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.689 11:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.689 11:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.689 11:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.689 11:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.689 11:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.689 11:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.689 11:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.689 11:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.689 11:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.689 11:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.689 11:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.689 11:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.689 11:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.689 11:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.689 11:55:34 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.689 11:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.689 11:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.689 11:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.689 11:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:45.689 11:55:34 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.689 00:06:45.689 real 0m1.337s 00:06:45.689 user 0m1.223s 00:06:45.689 sys 0m0.128s 00:06:45.689 11:55:34 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:45.689 11:55:34 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:45.689 ************************************ 00:06:45.689 END TEST accel_crc32c_C2 00:06:45.689 ************************************ 00:06:45.689 11:55:34 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:45.689 11:55:34 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:45.689 11:55:34 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:45.689 11:55:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.689 ************************************ 00:06:45.689 START TEST accel_copy 00:06:45.689 ************************************ 00:06:45.689 11:55:34 accel.accel_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy -y 00:06:45.689 11:55:34 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:45.689 11:55:34 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:45.689 11:55:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.689 11:55:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.689 11:55:34 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:45.689 11:55:34 accel.accel_copy -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:45.689 11:55:34 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:45.689 11:55:34 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.689 11:55:34 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.689 11:55:34 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.689 11:55:34 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.689 11:55:34 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.689 11:55:34 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:45.689 11:55:34 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:45.689 [2024-06-10 11:55:35.018165] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:06:45.689 [2024-06-10 11:55:35.018222] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2042415 ] 00:06:45.689 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.689 [2024-06-10 11:55:35.089246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.689 [2024-06-10 11:55:35.157206] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.689 11:55:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:45.689 11:55:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.689 11:55:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.689 11:55:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.689 11:55:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:45.689 11:55:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.689 11:55:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.689 11:55:35 accel.accel_copy -- accel/accel.sh@19 -- # read 
-r var val 00:06:45.689 11:55:35 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:45.689 11:55:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.689 11:55:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.689 11:55:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.689 11:55:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:45.689 11:55:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.689 11:55:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.690 11:55:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.690 11:55:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:45.690 11:55:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.690 11:55:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.690 11:55:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.690 11:55:35 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:45.690 11:55:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.690 11:55:35 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:45.690 11:55:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.690 11:55:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.690 11:55:35 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.690 11:55:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.690 11:55:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.690 11:55:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.690 11:55:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:45.690 11:55:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.690 11:55:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.690 11:55:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.690 11:55:35 accel.accel_copy -- accel/accel.sh@20 -- # val=software 
00:06:45.690 11:55:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.690 11:55:35 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:45.690 11:55:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.690 11:55:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.690 11:55:35 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:45.690 11:55:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.690 11:55:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.690 11:55:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.949 11:55:35 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:45.949 11:55:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.949 11:55:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.949 11:55:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.949 11:55:35 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:45.949 11:55:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.949 11:55:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.949 11:55:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.949 11:55:35 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.949 11:55:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.949 11:55:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.949 11:55:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.949 11:55:35 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:45.949 11:55:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.949 11:55:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.949 11:55:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.949 11:55:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:45.949 11:55:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:06:45.949 11:55:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.949 11:55:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.949 11:55:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:45.949 11:55:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.949 11:55:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.949 11:55:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.887 11:55:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:46.887 11:55:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.887 11:55:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.887 11:55:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.887 11:55:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:46.887 11:55:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.887 11:55:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.887 11:55:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.887 11:55:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:46.887 11:55:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.887 11:55:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.887 11:55:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.887 11:55:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:46.887 11:55:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.887 11:55:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.887 11:55:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.887 11:55:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:46.887 11:55:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.887 11:55:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.887 11:55:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.887 11:55:36 
accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:46.887 11:55:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.887 11:55:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.887 11:55:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.887 11:55:36 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.887 11:55:36 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:46.887 11:55:36 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.887 00:06:46.887 real 0m1.346s 00:06:46.887 user 0m1.223s 00:06:46.887 sys 0m0.136s 00:06:46.887 11:55:36 accel.accel_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:46.887 11:55:36 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:46.887 ************************************ 00:06:46.887 END TEST accel_copy 00:06:46.887 ************************************ 00:06:46.887 11:55:36 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:46.887 11:55:36 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:06:46.887 11:55:36 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:46.887 11:55:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.887 ************************************ 00:06:46.887 START TEST accel_fill 00:06:46.887 ************************************ 00:06:46.887 11:55:36 accel.accel_fill -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:46.887 11:55:36 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:46.887 11:55:36 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:46.887 11:55:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:46.887 11:55:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:46.887 11:55:36 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 
00:06:46.887 11:55:36 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:46.887 11:55:36 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:46.887 11:55:36 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.887 11:55:36 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.887 11:55:36 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.887 11:55:36 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.887 11:55:36 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.887 11:55:36 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:46.887 11:55:36 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:47.147 [2024-06-10 11:55:36.424872] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:06:47.147 [2024-06-10 11:55:36.424927] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2042696 ] 00:06:47.147 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.147 [2024-06-10 11:55:36.496141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.147 [2024-06-10 11:55:36.564410] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.147 11:55:36 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.147 11:55:36 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.147 11:55:36 
accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.147 11:55:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:48.526 11:55:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:48.526 11:55:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:48.526 11:55:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:48.526 11:55:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:48.526 11:55:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:48.526 11:55:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:48.526 11:55:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:48.526 11:55:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:48.526 11:55:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:48.526 11:55:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:48.526 11:55:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:48.526 11:55:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:48.526 11:55:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:48.526 11:55:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:48.526 11:55:37 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:06:48.526 11:55:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:48.526 11:55:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:48.526 11:55:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:48.526 11:55:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:48.526 11:55:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:48.526 11:55:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:48.526 11:55:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:48.526 11:55:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:48.526 11:55:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:48.526 11:55:37 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:48.526 11:55:37 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:48.526 11:55:37 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.526 00:06:48.526 real 0m1.346s 00:06:48.526 user 0m1.228s 00:06:48.526 sys 0m0.131s 00:06:48.526 11:55:37 accel.accel_fill -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:48.526 11:55:37 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:48.526 ************************************ 00:06:48.526 END TEST accel_fill 00:06:48.526 ************************************ 00:06:48.526 11:55:37 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:48.526 11:55:37 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:48.526 11:55:37 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:48.526 11:55:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.527 ************************************ 00:06:48.527 START TEST accel_copy_crc32c 00:06:48.527 ************************************ 00:06:48.527 11:55:37 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y 
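The repeated `IFS=: read -r var val` / `case "$var" in` lines in the trace above are accel.sh's configuration parser: `accel_perf` emits colon-separated key/value pairs over a file descriptor, and the harness reads them one pair at a time, dispatching on each key (e.g. recording `accel_module=software`). A minimal standalone sketch of that pattern follows; the variable names and the sample input are illustrative, not taken from accel.sh:

```shell
#!/usr/bin/env bash
# Sketch of the var/val parsing loop seen in the xtrace output.
# The real harness reads from accel_perf via a file descriptor;
# here a here-document stands in for that stream.
accel_module=""
accel_opc=""
while IFS=: read -r var val; do
    case "$var" in
        module) accel_module=$val ;;   # e.g. "software"
        opc)    accel_opc=$val ;;      # e.g. "fill" or "copy_crc32c"
        *)      : ;;                   # unknown keys are ignored
    esac
done <<'EOF'
module:software
opc:fill
queue_depth:64
EOF
echo "module=$accel_module opc=$accel_opc"
# prints: module=software opc=fill
```

Setting `IFS=:` only for the `read` keeps the split local to that one command, which is why the trace shows `IFS=:` re-executed before every `read -r var val`.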
00:06:48.527 11:55:37 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:48.527 11:55:37 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:48.527 11:55:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.527 11:55:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.527 11:55:37 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:48.527 11:55:37 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:48.527 11:55:37 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:48.527 11:55:37 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.527 11:55:37 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.527 11:55:37 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.527 11:55:37 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.527 11:55:37 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.527 11:55:37 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:48.527 11:55:37 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:48.527 [2024-06-10 11:55:37.833178] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:06:48.527 [2024-06-10 11:55:37.833235] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2042980 ] 00:06:48.527 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.527 [2024-06-10 11:55:37.901726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.527 [2024-06-10 11:55:37.969322] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.527 
11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 
00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.527 11:55:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.906 11:55:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.906 11:55:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.906 11:55:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.906 11:55:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.906 11:55:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.906 11:55:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.906 11:55:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.906 11:55:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.906 11:55:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.906 11:55:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.906 11:55:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.906 11:55:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.906 11:55:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.906 11:55:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.906 11:55:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.906 
11:55:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.906 11:55:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.906 11:55:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.906 11:55:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.906 11:55:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.906 11:55:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.906 11:55:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.906 11:55:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.906 11:55:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.906 11:55:39 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.906 11:55:39 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:49.906 11:55:39 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.906 00:06:49.906 real 0m1.341s 00:06:49.906 user 0m1.227s 00:06:49.906 sys 0m0.128s 00:06:49.906 11:55:39 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:49.906 11:55:39 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:49.906 ************************************ 00:06:49.906 END TEST accel_copy_crc32c 00:06:49.906 ************************************ 00:06:49.906 11:55:39 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:49.906 11:55:39 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:49.906 11:55:39 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:49.906 11:55:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.906 ************************************ 00:06:49.906 START TEST accel_copy_crc32c_C2 00:06:49.906 ************************************ 00:06:49.906 11:55:39 
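The `START TEST` / `END TEST` banners and the `real`/`user`/`sys` triplets above come from the `run_test` helper in `common/autotest_common.sh`, which brackets each case with banners and times it via the shell's `time` keyword. A hedged sketch of that shape (the function name and banner format here only mirror the log; the real helper is more elaborate):

```shell
#!/usr/bin/env bash
# Illustrative run_test-style wrapper: banner, time the command, banner.
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                # emits the real/user/sys lines on stderr
    local rc=$?              # preserve the timed command's exit status
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test_sketch demo_sleep sleep 0.1
```

Because `time` writes its report to stderr while the banners go to stdout, the two interleave in the captured console log exactly as seen above.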
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:49.906 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:49.906 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:49.906 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.906 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.906 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:49.906 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:49.906 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.906 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.906 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.906 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.906 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.906 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.906 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:49.906 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:49.906 [2024-06-10 11:55:39.254036] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:06:49.906 [2024-06-10 11:55:39.254101] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2043220 ] 00:06:49.906 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.906 [2024-06-10 11:55:39.326294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.906 [2024-06-10 11:55:39.395291] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.164 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.164 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.164 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.164 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.164 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.164 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.164 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.164 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.164 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:50.164 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.164 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.164 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.164 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.164 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # 
val= 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@20 -- # val=Yes 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.165 11:55:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.109 11:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.109 11:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.109 11:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.109 11:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.109 11:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.109 11:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.109 11:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.109 11:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.109 11:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.109 11:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.109 11:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.109 11:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r 
var val 00:06:51.109 11:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.110 11:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.110 11:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.110 11:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.110 11:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.110 11:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.110 11:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.110 11:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.110 11:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.110 11:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.110 11:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.110 11:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.110 11:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.110 11:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:51.110 11:55:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.110 00:06:51.110 real 0m1.350s 00:06:51.110 user 0m1.224s 00:06:51.110 sys 0m0.139s 00:06:51.110 11:55:40 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:51.110 11:55:40 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:51.110 ************************************ 00:06:51.110 END TEST accel_copy_crc32c_C2 00:06:51.110 ************************************ 00:06:51.110 11:55:40 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:51.110 11:55:40 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:51.110 11:55:40 accel -- 
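Each test above ends with the check `[[ software == \s\o\f\t\w\a\r\e ]]`: inside `[[ ]]`, the right-hand side is a glob pattern, so accel.sh backslash-escapes every character to force a literal, character-for-character comparison of the parsed module name. A small sketch of why that matters (the `module` variable here is illustrative):

```shell
#!/usr/bin/env bash
# Inside [[ ]], an unquoted RHS is a glob pattern; escaping each
# character (as accel.sh does) makes the comparison strictly literal.
module=software
if [[ $module == \s\o\f\t\w\a\r\e ]]; then
    result=match
else
    result=no-match
fi
echo "$result"
# prints: match

# By contrast, an unescaped glob on the RHS would match more than
# the literal string, e.g. [[ softwareX == software* ]] is true.
```

Quoting the RHS (`[[ $module == "software" ]]`) would have the same literal-match effect; the escaped form is simply what the xtrace expansion shows.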
common/autotest_common.sh@1106 -- # xtrace_disable 00:06:51.110 11:55:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.369 ************************************ 00:06:51.369 START TEST accel_dualcast 00:06:51.369 ************************************ 00:06:51.369 11:55:40 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dualcast -y 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:51.369 [2024-06-10 11:55:40.658356] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:06:51.369 [2024-06-10 11:55:40.658407] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2043441 ] 00:06:51.369 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.369 [2024-06-10 11:55:40.727631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.369 [2024-06-10 11:55:40.796622] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.369 
11:55:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.369 11:55:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:51.370 11:55:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.370 11:55:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.370 11:55:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.370 11:55:40 accel.accel_dualcast -- 
accel/accel.sh@20 -- # val=32 00:06:51.370 11:55:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.370 11:55:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.370 11:55:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.370 11:55:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:51.370 11:55:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.370 11:55:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.370 11:55:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.370 11:55:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.370 11:55:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.370 11:55:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.370 11:55:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.370 11:55:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:51.370 11:55:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.370 11:55:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.370 11:55:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.370 11:55:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:51.370 11:55:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.370 11:55:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.370 11:55:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:51.370 11:55:40 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:51.370 11:55:40 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:51.370 11:55:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:51.370 11:55:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:52.845 11:55:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:52.845 11:55:41 accel.accel_dualcast -- 
accel/accel.sh@21 -- # case "$var" in 00:06:52.845 11:55:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:52.845 11:55:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:52.845 11:55:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:52.845 11:55:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:52.845 11:55:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:52.845 11:55:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:52.845 11:55:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:52.845 11:55:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:52.845 11:55:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:52.845 11:55:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:52.845 11:55:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:52.845 11:55:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:52.845 11:55:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:52.845 11:55:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:52.845 11:55:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:52.845 11:55:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:52.845 11:55:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:52.845 11:55:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:52.845 11:55:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:52.845 11:55:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:52.845 11:55:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:52.845 11:55:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:52.845 11:55:41 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.845 11:55:41 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:52.845 11:55:41 
accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.845 00:06:52.845 real 0m1.334s 00:06:52.845 user 0m1.222s 00:06:52.845 sys 0m0.125s 00:06:52.845 11:55:41 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:52.845 11:55:41 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:52.845 ************************************ 00:06:52.845 END TEST accel_dualcast 00:06:52.845 ************************************ 00:06:52.845 11:55:42 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:52.845 11:55:42 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:52.845 11:55:42 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:52.845 11:55:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.845 ************************************ 00:06:52.845 START TEST accel_compare 00:06:52.845 ************************************ 00:06:52.845 11:55:42 accel.accel_compare -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compare -y 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.845 11:55:42 accel.accel_compare -- 
accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:52.845 [2024-06-10 11:55:42.064717] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:06:52.845 [2024-06-10 11:55:42.064793] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2043652 ] 00:06:52.845 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.845 [2024-06-10 11:55:42.136037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.845 [2024-06-10 11:55:42.207634] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:52.845 11:55:42 
accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:52.845 
11:55:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:52.845 11:55:42 
accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:52.845 11:55:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.221 11:55:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:54.221 11:55:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:54.221 11:55:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:54.221 11:55:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.221 11:55:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:54.221 11:55:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:54.221 11:55:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:54.221 11:55:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.221 11:55:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:54.221 11:55:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:54.221 11:55:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:54.221 11:55:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.221 11:55:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:54.221 11:55:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:54.221 11:55:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:54.221 11:55:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.221 11:55:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:54.221 11:55:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:54.221 11:55:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:54.221 11:55:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.221 11:55:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:54.221 11:55:43 accel.accel_compare -- accel/accel.sh@21 
-- # case "$var" in 00:06:54.221 11:55:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:54.221 11:55:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.221 11:55:43 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.221 11:55:43 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:54.221 11:55:43 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.221 00:06:54.221 real 0m1.350s 00:06:54.221 user 0m1.228s 00:06:54.221 sys 0m0.135s 00:06:54.221 11:55:43 accel.accel_compare -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:54.221 11:55:43 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:54.221 ************************************ 00:06:54.221 END TEST accel_compare 00:06:54.221 ************************************ 00:06:54.221 11:55:43 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:54.221 11:55:43 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:54.221 11:55:43 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:54.222 11:55:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.222 ************************************ 00:06:54.222 START TEST accel_xor 00:06:54.222 ************************************ 00:06:54.222 11:55:43 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w xor -y 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:54.222 [2024-06-10 11:55:43.484775] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:06:54.222 [2024-06-10 11:55:43.484837] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2043892 ] 00:06:54.222 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.222 [2024-06-10 11:55:43.554131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.222 [2024-06-10 11:55:43.622809] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:54.222 11:55:43 
accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 
-- # IFS=: 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 
00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.222 11:55:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.599 11:55:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:55.599 11:55:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.599 11:55:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.599 11:55:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.599 11:55:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:55.599 11:55:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.599 11:55:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.599 11:55:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.599 11:55:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:55.599 11:55:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.599 11:55:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.599 11:55:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.599 11:55:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:55.599 11:55:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.599 11:55:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.599 11:55:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.599 11:55:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:55.599 11:55:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.599 11:55:44 accel.accel_xor -- accel/accel.sh@19 -- # 
IFS=: 00:06:55.599 11:55:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.599 11:55:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:55.600 11:55:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.600 11:55:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.600 11:55:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.600 11:55:44 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.600 11:55:44 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:55.600 11:55:44 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.600 00:06:55.600 real 0m1.342s 00:06:55.600 user 0m1.219s 00:06:55.600 sys 0m0.136s 00:06:55.600 11:55:44 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:55.600 11:55:44 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:55.600 ************************************ 00:06:55.600 END TEST accel_xor 00:06:55.600 ************************************ 00:06:55.600 11:55:44 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:55.600 11:55:44 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:55.600 11:55:44 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:55.600 11:55:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.600 ************************************ 00:06:55.600 START TEST accel_xor 00:06:55.600 ************************************ 00:06:55.600 11:55:44 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y -x 3 00:06:55.600 11:55:44 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:55.600 11:55:44 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:55.600 11:55:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.600 11:55:44 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:55.600 11:55:44 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:06:55.600 11:55:44 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:55.600 11:55:44 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:55.600 11:55:44 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.600 11:55:44 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.600 11:55:44 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.600 11:55:44 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.600 11:55:44 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.600 11:55:44 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:55.600 11:55:44 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:55.600 [2024-06-10 11:55:44.867216] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:06:55.600 [2024-06-10 11:55:44.867254] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2044158 ] 00:06:55.600 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.600 [2024-06-10 11:55:44.935572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.600 [2024-06-10 11:55:45.004756] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.600 11:55:45 accel.accel_xor -- 
accel/accel.sh@19 -- # IFS=: 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 
00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:55.600 11:55:45 
accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.600 11:55:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.980 11:55:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.980 11:55:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.980 11:55:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.980 11:55:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.980 11:55:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.980 11:55:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.980 11:55:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.980 11:55:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.980 11:55:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.980 11:55:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.980 11:55:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.980 11:55:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.980 11:55:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.980 11:55:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.980 11:55:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.980 11:55:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 
00:06:56.980 11:55:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.980 11:55:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.980 11:55:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.980 11:55:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.980 11:55:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.980 11:55:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.980 11:55:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.980 11:55:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.980 11:55:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.980 11:55:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:56.980 11:55:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.980 00:06:56.980 real 0m1.323s 00:06:56.980 user 0m1.199s 00:06:56.980 sys 0m0.128s 00:06:56.980 11:55:46 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:56.980 11:55:46 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:56.980 ************************************ 00:06:56.980 END TEST accel_xor 00:06:56.980 ************************************ 00:06:56.980 11:55:46 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:56.980 11:55:46 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:56.980 11:55:46 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:56.980 11:55:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.980 ************************************ 00:06:56.980 START TEST accel_dif_verify 00:06:56.980 ************************************ 00:06:56.980 11:55:46 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_verify 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:56.980 11:55:46 accel.accel_dif_verify -- 
accel/accel.sh@17 -- # local accel_module 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:56.980 [2024-06-10 11:55:46.281910] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:06:56.980 [2024-06-10 11:55:46.281970] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2044443 ] 00:06:56.980 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.980 [2024-06-10 11:55:46.353943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.980 [2024-06-10 11:55:46.426795] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@21 
-- # case "$var" in 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.980 11:55:46 
accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.980 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.981 11:55:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.981 11:55:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.981 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.981 11:55:46 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.981 11:55:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:56.981 11:55:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.981 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.981 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.981 11:55:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:56.981 11:55:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.981 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.981 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.981 11:55:46 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:56.981 11:55:46 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.981 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.981 11:55:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.358 11:55:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:58.358 11:55:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.358 11:55:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.358 11:55:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.358 11:55:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:58.358 11:55:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.358 11:55:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.358 11:55:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.358 11:55:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:58.358 11:55:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.358 11:55:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.358 11:55:47 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # read -r var val 00:06:58.358 11:55:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:58.358 11:55:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.358 11:55:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.358 11:55:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.358 11:55:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:58.358 11:55:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.358 11:55:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.358 11:55:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.358 11:55:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:58.358 11:55:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.358 11:55:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.358 11:55:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.358 11:55:47 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.358 11:55:47 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:58.358 11:55:47 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.358 00:06:58.358 real 0m1.347s 00:06:58.358 user 0m1.216s 00:06:58.358 sys 0m0.134s 00:06:58.358 11:55:47 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:58.358 11:55:47 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:58.358 ************************************ 00:06:58.358 END TEST accel_dif_verify 00:06:58.358 ************************************ 00:06:58.358 11:55:47 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:58.358 11:55:47 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:58.358 11:55:47 accel -- common/autotest_common.sh@1106 -- # 
xtrace_disable 00:06:58.358 11:55:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.358 ************************************ 00:06:58.358 START TEST accel_dif_generate 00:06:58.358 ************************************ 00:06:58.358 11:55:47 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate 00:06:58.358 11:55:47 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:58.358 11:55:47 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:58.358 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.358 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.358 11:55:47 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:58.358 11:55:47 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:58.358 11:55:47 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:58.358 11:55:47 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.358 11:55:47 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.358 11:55:47 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.358 11:55:47 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.358 11:55:47 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.358 11:55:47 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:58.358 11:55:47 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:58.358 [2024-06-10 11:55:47.700277] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:06:58.358 [2024-06-10 11:55:47.700331] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2044728 ] 00:06:58.358 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.358 [2024-06-10 11:55:47.768174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.358 [2024-06-10 11:55:47.835603] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.358 11:55:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:58.358 11:55:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.358 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.358 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.358 11:55:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:58.358 11:55:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.358 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.358 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:58.618 11:55:47 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.618 11:55:47 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.618 11:55:47 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:58.618 11:55:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:59.555 11:55:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:59.555 11:55:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:59.555 11:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:59.555 11:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:59.555 11:55:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:59.555 11:55:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:59.555 11:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:59.555 11:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:59.555 11:55:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:59.555 11:55:49 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:59.555 11:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:59.555 11:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:59.555 11:55:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:59.555 11:55:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:59.555 11:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:59.555 11:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:59.555 11:55:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:59.555 11:55:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:59.555 11:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:59.555 11:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:59.555 11:55:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:59.555 11:55:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:59.555 11:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:59.556 11:55:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:59.556 11:55:49 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.556 11:55:49 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:59.556 11:55:49 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.556 00:06:59.556 real 0m1.333s 00:06:59.556 user 0m1.207s 00:06:59.556 sys 0m0.128s 00:06:59.556 11:55:49 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:59.556 11:55:49 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:59.556 ************************************ 00:06:59.556 END TEST accel_dif_generate 00:06:59.556 ************************************ 00:06:59.556 11:55:49 accel -- 
accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:59.556 11:55:49 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:59.556 11:55:49 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:59.556 11:55:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.815 ************************************ 00:06:59.815 START TEST accel_dif_generate_copy 00:06:59.815 ************************************ 00:06:59.815 11:55:49 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate_copy 00:06:59.815 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:59.815 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:59.815 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.815 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.815 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:59.815 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:59.815 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:59.815 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.815 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.815 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.815 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.815 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.815 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:59.815 11:55:49 accel.accel_dif_generate_copy -- 
accel/accel.sh@41 -- # jq -r . 00:06:59.815 [2024-06-10 11:55:49.105063] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:06:59.815 [2024-06-10 11:55:49.105118] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2045012 ] 00:06:59.815 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.815 [2024-06-10 11:55:49.173417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.815 [2024-06-10 11:55:49.240995] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.815 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:59.815 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.815 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.815 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.815 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:59.815 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.815 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.815 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.815 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:59.815 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.815 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.815 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.815 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:59.815 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.815 11:55:49 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:59.815 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.815 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.816 11:55:49 accel.accel_dif_generate_copy 
-- accel/accel.sh@19 -- # read -r var val 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- 
accel/accel.sh@20 -- # val=No
00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:06:59.816 11:55:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:07:01.194 11:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:07:01.194 11:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:01.194 11:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:07:01.194 11:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:07:01.194 11:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:07:01.194 11:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:01.194 11:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:07:01.194 11:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:07:01.194 11:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:07:01.194 11:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:01.194 11:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:07:01.194 11:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:07:01.194 11:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:07:01.194 11:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:01.194 11:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:07:01.194 11:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:07:01.194 11:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:07:01.194 11:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:01.194 11:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:07:01.194 11:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:07:01.194 11:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:07:01.194 11:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:01.194 11:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:07:01.194 11:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:07:01.194 11:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:01.194 11:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]]
00:07:01.194 11:55:50 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:01.194
00:07:01.194 real	0m1.333s
00:07:01.194 user	0m1.206s
00:07:01.194 sys	0m0.128s
00:07:01.194 11:55:50 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # xtrace_disable
00:07:01.194 11:55:50 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x
00:07:01.194 ************************************
00:07:01.194 END TEST accel_dif_generate_copy
00:07:01.194 ************************************
00:07:01.194 11:55:50 accel -- accel/accel.sh@115 -- # [[ y == y ]]
00:07:01.194 11:55:50 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:01.194 11:55:50 accel -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']'
00:07:01.194 11:55:50 accel -- common/autotest_common.sh@1106 -- # xtrace_disable
00:07:01.194 11:55:50 accel -- common/autotest_common.sh@10 -- # set +x
00:07:01.194 ************************************
00:07:01.194 START TEST accel_comp
00:07:01.194 ************************************
00:07:01.194 11:55:50 accel.accel_comp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=,
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@41 -- # jq -r .
00:07:01.194 [2024-06-10 11:55:50.496702] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization...
00:07:01.194 [2024-06-10 11:55:50.496775] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2045300 ]
00:07:01.194 EAL: No free 2048 kB hugepages reported on node 1
00:07:01.194 [2024-06-10 11:55:50.568482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:01.194 [2024-06-10 11:55:50.638832] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@20 -- # val=compress
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:07:01.194 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@20 -- # val=software
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@20 -- # val=32
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@20 -- # val=32
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@20 -- # val=1
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds'
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@20 -- # val=No
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:07:01.195 11:55:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:07:02.573 11:55:51 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:07:02.574 11:55:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:07:02.574 11:55:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:07:02.574 11:55:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:07:02.574 11:55:51 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:07:02.574 11:55:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:07:02.574 11:55:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:07:02.574 11:55:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:07:02.574 11:55:51 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:07:02.574 11:55:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:07:02.574 11:55:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:07:02.574 11:55:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:07:02.574 11:55:51 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:07:02.574 11:55:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:07:02.574 11:55:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:07:02.574 11:55:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:07:02.574 11:55:51 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:07:02.574 11:55:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:07:02.574 11:55:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:07:02.574 11:55:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:07:02.574 11:55:51 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:07:02.574 11:55:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:07:02.574 11:55:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:07:02.574 11:55:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:07:02.574 11:55:51 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:02.574 11:55:51 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]]
00:07:02.574 11:55:51 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:02.574
00:07:02.574 real	0m1.344s
00:07:02.574 user	0m1.221s
00:07:02.574 sys	0m0.126s
00:07:02.574 11:55:51 accel.accel_comp -- common/autotest_common.sh@1125 -- # xtrace_disable
00:07:02.574 11:55:51 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x
00:07:02.574 ************************************
00:07:02.574 END TEST accel_comp
00:07:02.574 ************************************
00:07:02.574 11:55:51 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:07:02.574 11:55:51 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']'
00:07:02.574 11:55:51 accel -- common/autotest_common.sh@1106 -- # xtrace_disable
00:07:02.574 11:55:51 accel -- common/autotest_common.sh@10 -- # set +x
00:07:02.574 ************************************
00:07:02.574 START TEST accel_decomp
00:07:02.574 ************************************
00:07:02.574 11:55:51 accel.accel_decomp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:07:02.574 11:55:51 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc
00:07:02.574 11:55:51 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module
00:07:02.574 11:55:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:07:02.574 11:55:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:07:02.574 11:55:51 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:07:02.574 11:55:51 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:07:02.574 11:55:51 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config
00:07:02.574 11:55:51 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:02.574 11:55:51 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:02.574 11:55:51 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:02.574 11:55:51 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:02.574 11:55:51 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:02.574 11:55:51 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=,
00:07:02.574 11:55:51 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r .
00:07:02.574 [2024-06-10 11:55:51.908647] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization...
00:07:02.574 [2024-06-10 11:55:51.908706] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2045580 ]
00:07:02.574 EAL: No free 2048 kB hugepages reported on node 1
00:07:02.574 [2024-06-10 11:55:51.977863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:02.574 [2024-06-10 11:55:52.045527] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=software
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=32
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=32
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=1
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:07:02.574 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:07:02.832 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:07:02.832 11:55:52 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds'
00:07:02.832 11:55:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:07:02.832 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:07:02.832 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:07:02.832 11:55:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes
00:07:02.832 11:55:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:07:02.832 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:07:02.832 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:07:02.832 11:55:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:07:02.832 11:55:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:07:02.832 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:07:02.832 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:07:02.832 11:55:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:07:02.832 11:55:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:07:02.832 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:07:02.832 11:55:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:07:03.790 11:55:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:07:03.790 11:55:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:07:03.790 11:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:07:03.790 11:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:07:03.790 11:55:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:07:03.790 11:55:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:07:03.790 11:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:07:03.790 11:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:07:03.790 11:55:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:07:03.790 11:55:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:07:03.790 11:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:07:03.790 11:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:07:03.790 11:55:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:07:03.790 11:55:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:07:03.790 11:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:07:03.790 11:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:07:03.790 11:55:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:07:03.790 11:55:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:07:03.790 11:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:07:03.790 11:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:07:03.790 11:55:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:07:03.790 11:55:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:07:03.790 11:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:07:03.790 11:55:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:07:03.790 11:55:53 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:03.790 11:55:53 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:03.790 11:55:53 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:03.790
00:07:03.790 real	0m1.338s
00:07:03.790 user	0m1.219s
00:07:03.790 sys	0m0.122s
00:07:03.791 11:55:53 accel.accel_decomp -- common/autotest_common.sh@1125 -- # xtrace_disable
00:07:03.791 11:55:53 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x
00:07:03.791 ************************************
00:07:03.791 END TEST accel_decomp
00:07:03.791 ************************************
00:07:03.791 11:55:53 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:07:03.791 11:55:53 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']'
00:07:03.791 11:55:53 accel -- common/autotest_common.sh@1106 -- # xtrace_disable
00:07:03.791 11:55:53 accel -- common/autotest_common.sh@10 -- # set +x
00:07:03.791 ************************************
00:07:03.791 START TEST accel_decomp_full
00:07:03.791 ************************************
00:07:03.791 11:55:53 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:07:03.791 11:55:53 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc
00:07:03.791 11:55:53 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module
00:07:03.791 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:07:03.791 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:07:03.791 11:55:53 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:07:03.791 11:55:53 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:07:03.791 11:55:53 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config
00:07:03.791 11:55:53 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:03.791 11:55:53 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:03.791 11:55:53 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:03.791 11:55:53 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:03.791 11:55:53 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:03.791 11:55:53 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=,
00:07:03.791 11:55:53 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r .
00:07:04.050 [2024-06-10 11:55:53.316795] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization...
00:07:04.050 [2024-06-10 11:55:53.316849] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2045860 ]
00:07:04.050 EAL: No free 2048 kB hugepages reported on node 1
00:07:04.050 [2024-06-10 11:55:53.385214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:04.050 [2024-06-10 11:55:53.454340] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes'
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software
00:07:04.050 11:55:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds'
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:07:04.051 11:55:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:07:05.430 11:55:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:07:05.430 11:55:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:07:05.430 11:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:07:05.430 11:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:07:05.430 11:55:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:07:05.430 11:55:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:07:05.430 11:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:07:05.430 11:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:07:05.430 11:55:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:07:05.430 11:55:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:07:05.430 11:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:07:05.430 11:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:07:05.430 11:55:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:07:05.430 11:55:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:07:05.430 11:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:07:05.430 11:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:07:05.430 11:55:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:07:05.430 11:55:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:07:05.430 11:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:07:05.430 11:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:07:05.430 11:55:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:07:05.430 11:55:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:07:05.430 11:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:07:05.430 11:55:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:07:05.430 11:55:54 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:05.430 11:55:54 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:05.430 11:55:54 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:05.431
00:07:05.431 real	0m1.347s
00:07:05.431 user	0m1.222s
00:07:05.431 sys	0m0.127s
00:07:05.431 11:55:54 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # xtrace_disable
00:07:05.431 11:55:54 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x
00:07:05.431 ************************************
00:07:05.431 END TEST accel_decomp_full
00:07:05.431 ************************************
00:07:05.431 11:55:54 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:07:05.431 11:55:54 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']'
00:07:05.431 11:55:54 accel -- common/autotest_common.sh@1106 -- # xtrace_disable
00:07:05.431 11:55:54 accel -- common/autotest_common.sh@10 -- # set +x
00:07:05.431 ************************************
00:07:05.431 START TEST accel_decomp_mcore 00:07:05.431 ************************************ 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:05.431 [2024-06-10 11:55:54.736931] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:07:05.431 [2024-06-10 11:55:54.736996] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2046108 ] 00:07:05.431 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.431 [2024-06-10 11:55:54.808689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:05.431 [2024-06-10 11:55:54.880226] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.431 [2024-06-10 11:55:54.880321] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.431 [2024-06-10 11:55:54.880405] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.431 [2024-06-10 11:55:54.880407] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:05.431 11:55:54 accel.accel_decomp_mcore 
-- accel/accel.sh@21 -- # case "$var" in 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@21 
-- # case "$var" in 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.431 11:55:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.810 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:06.810 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.810 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.810 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.810 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:06.810 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.810 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.811 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.811 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:06.811 11:55:56 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:07:06.811 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.811 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.811 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:06.811 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.811 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.811 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.811 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:06.811 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.811 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.811 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.811 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:06.811 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.811 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.811 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.811 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:06.811 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.811 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.811 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.811 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:06.811 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.811 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.811 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.811 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:06.811 11:55:56 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.811 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.811 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.811 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.811 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:06.811 11:55:56 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.811 00:07:06.811 real 0m1.360s 00:07:06.811 user 0m4.568s 00:07:06.811 sys 0m0.138s 00:07:06.811 11:55:56 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:06.811 11:55:56 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:06.811 ************************************ 00:07:06.811 END TEST accel_decomp_mcore 00:07:06.811 ************************************ 00:07:06.811 11:55:56 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:06.811 11:55:56 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:07:06.811 11:55:56 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:06.811 11:55:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.811 ************************************ 00:07:06.811 START TEST accel_decomp_full_mcore 00:07:06.811 ************************************ 00:07:06.811 11:55:56 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:06.811 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:06.811 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:06.811 11:55:56 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:06.811 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.811 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:06.811 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:06.811 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:06.811 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.811 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.811 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.811 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.811 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.811 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:06.811 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:06.811 [2024-06-10 11:55:56.179613] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:07:06.811 [2024-06-10 11:55:56.179672] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2046336 ] 00:07:06.811 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.811 [2024-06-10 11:55:56.251341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:06.811 [2024-06-10 11:55:56.324505] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.811 [2024-06-10 11:55:56.324559] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.811 [2024-06-10 11:55:56.324661] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:06.811 [2024-06-10 11:55:56.324663] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val=0xf 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- 
# case "$var" in 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.071 11:55:56 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.071 11:55:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.009 11:55:57 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.009 00:07:08.009 real 0m1.374s 00:07:08.009 user 0m4.599s 00:07:08.009 sys 0m0.146s 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:08.009 11:55:57 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:08.009 ************************************ 00:07:08.009 END TEST accel_decomp_full_mcore 00:07:08.009 ************************************ 00:07:08.268 11:55:57 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:08.268 11:55:57 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:07:08.268 11:55:57 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:08.268 11:55:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.268 
************************************ 00:07:08.268 START TEST accel_decomp_mthread 00:07:08.268 ************************************ 00:07:08.268 11:55:57 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:08.268 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:08.268 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:08.268 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.268 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.268 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:08.268 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:08.268 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:08.268 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.268 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.268 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.268 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.268 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.268 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:08.268 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:08.268 [2024-06-10 11:55:57.628952] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:07:08.268 [2024-06-10 11:55:57.629009] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2046577 ] 00:07:08.268 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.268 [2024-06-10 11:55:57.699260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.268 [2024-06-10 11:55:57.767973] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # 
val= 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- 
accel/accel.sh@22 -- # accel_module=software 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r 
var val 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.528 11:55:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.465 11:55:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.465 11:55:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.465 11:55:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.465 11:55:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.465 11:55:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.465 11:55:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.465 11:55:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.465 11:55:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.465 11:55:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.465 11:55:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.465 11:55:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.465 11:55:58 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.465 11:55:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.465 11:55:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.465 11:55:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.465 11:55:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.465 11:55:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.465 11:55:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.465 11:55:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.465 11:55:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.465 11:55:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.465 11:55:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.465 11:55:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.465 11:55:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.466 11:55:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.466 11:55:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.466 11:55:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.466 11:55:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.466 11:55:58 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.466 11:55:58 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:09.466 11:55:58 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.466 00:07:09.466 real 0m1.351s 00:07:09.466 user 0m1.234s 00:07:09.466 sys 0m0.134s 00:07:09.466 11:55:58 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:09.466 11:55:58 accel.accel_decomp_mthread -- 
common/autotest_common.sh@10 -- # set +x 00:07:09.466 ************************************ 00:07:09.466 END TEST accel_decomp_mthread 00:07:09.466 ************************************ 00:07:09.725 11:55:58 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:09.725 11:55:58 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:07:09.725 11:55:58 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:09.726 11:55:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.726 ************************************ 00:07:09.726 START TEST accel_decomp_full_mthread 00:07:09.726 ************************************ 00:07:09.726 11:55:59 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:09.726 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:09.726 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:09.726 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.726 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.726 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:09.726 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:09.726 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:09.726 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # 
accel_json_cfg=() 00:07:09.726 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.726 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.726 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.726 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.726 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:09.726 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:09.726 [2024-06-10 11:55:59.062107] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:07:09.726 [2024-06-10 11:55:59.062165] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2046789 ] 00:07:09.726 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.726 [2024-06-10 11:55:59.133755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.726 [2024-06-10 11:55:59.203811] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.986 11:55:59 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 
bytes' 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 
00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # 
IFS=: 00:07:09.986 11:55:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.924 11:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:10.924 11:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.924 11:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.924 11:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.924 11:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:10.924 11:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.924 11:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.924 11:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.925 11:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:10.925 11:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.925 11:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.925 11:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.925 11:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:10.925 11:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.925 11:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.925 11:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.925 11:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:10.925 11:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.925 11:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.925 11:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.925 11:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 
00:07:10.925 11:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.925 11:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.925 11:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.925 11:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:10.925 11:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.925 11:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.925 11:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.925 11:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.925 11:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:10.925 11:56:00 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.925 00:07:10.925 real 0m1.377s 00:07:10.925 user 0m1.248s 00:07:10.925 sys 0m0.144s 00:07:10.925 11:56:00 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:10.925 11:56:00 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:10.925 ************************************ 00:07:10.925 END TEST accel_decomp_full_mthread 00:07:10.925 ************************************ 00:07:11.185 11:56:00 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:11.185 11:56:00 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:11.185 11:56:00 accel -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:11.185 11:56:00 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:11.185 11:56:00 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:11.185 11:56:00 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.185 11:56:00 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:11.185 11:56:00 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.185 11:56:00 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.185 11:56:00 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.185 11:56:00 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.185 11:56:00 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:11.185 11:56:00 accel -- accel/accel.sh@41 -- # jq -r . 00:07:11.185 ************************************ 00:07:11.185 START TEST accel_dif_functional_tests 00:07:11.185 ************************************ 00:07:11.185 11:56:00 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:11.185 [2024-06-10 11:56:00.529538] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:07:11.185 [2024-06-10 11:56:00.529584] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2047055 ] 00:07:11.185 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.185 [2024-06-10 11:56:00.598038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:11.185 [2024-06-10 11:56:00.670641] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.185 [2024-06-10 11:56:00.670735] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.185 [2024-06-10 11:56:00.670735] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.444 00:07:11.444 00:07:11.444 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.444 http://cunit.sourceforge.net/ 00:07:11.444 00:07:11.444 00:07:11.444 Suite: accel_dif 00:07:11.444 Test: verify: DIF generated, GUARD check ...passed 00:07:11.444 Test: verify: DIF generated, APPTAG check ...passed 00:07:11.444 Test: verify: DIF 
generated, REFTAG check ...passed 00:07:11.444 Test: verify: DIF not generated, GUARD check ...[2024-06-10 11:56:00.738931] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:11.444 passed 00:07:11.444 Test: verify: DIF not generated, APPTAG check ...[2024-06-10 11:56:00.738982] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:11.444 passed 00:07:11.444 Test: verify: DIF not generated, REFTAG check ...[2024-06-10 11:56:00.739005] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:11.444 passed 00:07:11.444 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:11.444 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-10 11:56:00.739052] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:11.444 passed 00:07:11.444 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:11.444 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:11.444 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:11.444 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-10 11:56:00.739160] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:11.444 passed 00:07:11.444 Test: verify copy: DIF generated, GUARD check ...passed 00:07:11.444 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:11.444 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:11.444 Test: verify copy: DIF not generated, GUARD check ...[2024-06-10 11:56:00.739277] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:11.445 passed 00:07:11.445 Test: verify copy: DIF not generated, APPTAG check ...[2024-06-10 11:56:00.739300] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:11.445 passed 00:07:11.445 Test: verify 
copy: DIF not generated, REFTAG check ...[2024-06-10 11:56:00.739323] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:11.445 passed 00:07:11.445 Test: generate copy: DIF generated, GUARD check ...passed 00:07:11.445 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:11.445 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:11.445 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:11.445 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:11.445 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:11.445 Test: generate copy: iovecs-len validate ...[2024-06-10 11:56:00.739506] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:11.445 passed 00:07:11.445 Test: generate copy: buffer alignment validate ...passed 00:07:11.445 00:07:11.445 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.445 suites 1 1 n/a 0 0 00:07:11.445 tests 26 26 26 0 0 00:07:11.445 asserts 115 115 115 0 n/a 00:07:11.445 00:07:11.445 Elapsed time = 0.002 seconds 00:07:11.445 00:07:11.445 real 0m0.415s 00:07:11.445 user 0m0.609s 00:07:11.445 sys 0m0.158s 00:07:11.445 11:56:00 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:11.445 11:56:00 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:11.445 ************************************ 00:07:11.445 END TEST accel_dif_functional_tests 00:07:11.445 ************************************ 00:07:11.445 00:07:11.445 real 0m31.312s 00:07:11.445 user 0m34.470s 00:07:11.445 sys 0m4.852s 00:07:11.445 11:56:00 accel -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:11.445 11:56:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.445 ************************************ 00:07:11.445 END TEST accel 00:07:11.445 
************************************ 00:07:11.703 11:56:00 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:11.703 11:56:00 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:11.703 11:56:00 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:11.703 11:56:00 -- common/autotest_common.sh@10 -- # set +x 00:07:11.703 ************************************ 00:07:11.703 START TEST accel_rpc 00:07:11.703 ************************************ 00:07:11.703 11:56:01 accel_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:11.703 * Looking for test storage... 00:07:11.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:11.703 11:56:01 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:11.703 11:56:01 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2047362 00:07:11.703 11:56:01 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2047362 00:07:11.703 11:56:01 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:11.703 11:56:01 accel_rpc -- common/autotest_common.sh@830 -- # '[' -z 2047362 ']' 00:07:11.703 11:56:01 accel_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.703 11:56:01 accel_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:11.703 11:56:01 accel_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:11.703 11:56:01 accel_rpc -- common/autotest_common.sh@839 -- # xtrace_disable
00:07:11.703 11:56:01 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:11.703 [2024-06-10 11:56:01.187542] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization...
00:07:11.703 [2024-06-10 11:56:01.187594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2047362 ]
00:07:11.703 EAL: No free 2048 kB hugepages reported on node 1
00:07:11.962 [2024-06-10 11:56:01.255906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:11.962 [2024-06-10 11:56:01.327662] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:07:12.531 11:56:01 accel_rpc -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:07:12.531 11:56:01 accel_rpc -- common/autotest_common.sh@863 -- # return 0
00:07:12.531 11:56:01 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]]
00:07:12.531 11:56:01 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]]
00:07:12.531 11:56:01 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]]
00:07:12.531 11:56:01 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]]
00:07:12.531 11:56:01 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite
00:07:12.531 11:56:01 accel_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:07:12.531 11:56:01 accel_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable
00:07:12.531 11:56:01 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:12.531 ************************************
00:07:12.531 START TEST accel_assign_opcode
00:07:12.531 ************************************
00:07:12.531 11:56:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # accel_assign_opcode_test_suite
00:07:12.531 11:56:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect
00:07:12.531 11:56:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:12.531 11:56:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:07:12.531 [2024-06-10 11:56:02.005723] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect
00:07:12.531 11:56:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:12.531 11:56:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software
00:07:12.531 11:56:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:12.531 11:56:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:07:12.531 [2024-06-10 11:56:02.013730] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software
00:07:12.531 11:56:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:12.531 11:56:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init
00:07:12.531 11:56:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:12.531 11:56:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:07:12.794 11:56:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:12.794 11:56:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments
00:07:12.794 11:56:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:12.794 11:56:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:07:12.794 11:56:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy
00:07:12.794 11:56:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software
00:07:12.794 11:56:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:12.794 software
00:07:12.794
00:07:12.794 real 0m0.231s
00:07:12.794 user 0m0.038s
00:07:12.794 sys 0m0.011s
00:07:12.794 11:56:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # xtrace_disable
00:07:12.794 11:56:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:07:12.794 ************************************
00:07:12.794 END TEST accel_assign_opcode
00:07:12.794 ************************************
00:07:12.794 11:56:02 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2047362
00:07:12.794 11:56:02 accel_rpc -- common/autotest_common.sh@949 -- # '[' -z 2047362 ']'
00:07:12.794 11:56:02 accel_rpc -- common/autotest_common.sh@953 -- # kill -0 2047362
00:07:12.794 11:56:02 accel_rpc -- common/autotest_common.sh@954 -- # uname
00:07:12.794 11:56:02 accel_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:07:12.794 11:56:02 accel_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2047362
00:07:13.062 11:56:02 accel_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:07:13.062 11:56:02 accel_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:07:13.062 11:56:02 accel_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2047362'
killing process with pid 2047362
00:07:13.062 11:56:02 accel_rpc -- common/autotest_common.sh@968 -- # kill 2047362
00:07:13.062 11:56:02 accel_rpc -- common/autotest_common.sh@973 -- # wait 2047362
00:07:13.321
00:07:13.321 real 0m1.601s
00:07:13.321 user 0m1.623s
00:07:13.321 sys 0m0.474s
00:07:13.321 11:56:02 accel_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:07:13.321 11:56:02 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:13.321 ************************************
00:07:13.321 END TEST accel_rpc
00:07:13.321 ************************************
00:07:13.321 11:56:02 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:07:13.321 11:56:02 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:07:13.321 11:56:02 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:07:13.321 11:56:02 -- common/autotest_common.sh@10 -- # set +x
00:07:13.321 ************************************
00:07:13.321 START TEST app_cmdline
00:07:13.321 ************************************
00:07:13.321 11:56:02 app_cmdline -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:07:13.321 * Looking for test storage...
00:07:13.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:07:13.321 11:56:02 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:07:13.321 11:56:02 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2047709
00:07:13.321 11:56:02 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2047709
00:07:13.321 11:56:02 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:07:13.321 11:56:02 app_cmdline -- common/autotest_common.sh@830 -- # '[' -z 2047709 ']'
00:07:13.321 11:56:02 app_cmdline -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:13.321 11:56:02 app_cmdline -- common/autotest_common.sh@835 -- # local max_retries=100
00:07:13.321 11:56:02 app_cmdline -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:13.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
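Here `spdk_tgt` is started with `--rpcs-allowed spdk_get_version,rpc_get_methods`, so later in this test a call to any other method (the trace below uses `env_dpdk_get_mem_stats`) comes back as a JSON-RPC error with code -32601, "Method not found". A small sketch of classifying such a response in shell; the `response` literal is a hypothetical stand-in for the error body in the trace, and the matching is plain string matching, not how `rpc.py` itself does it:

```shell
# Hypothetical JSON-RPC error body, mirroring the -32601 response that the
# cmdline test below provokes by calling a method outside --rpcs-allowed.
response='{"code": -32601, "message": "Method not found"}'

verdict=allowed
case "$response" in
  *'"code": -32601'*) verdict=rejected ;;  # -32601 is JSON-RPC "method not found"
esac
echo "$verdict"
```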
00:07:13.321 11:56:02 app_cmdline -- common/autotest_common.sh@839 -- # xtrace_disable
00:07:13.321 11:56:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:07:13.581 [2024-06-10 11:56:02.868244] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization...
00:07:13.581 [2024-06-10 11:56:02.868293] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2047709 ]
00:07:13.581 EAL: No free 2048 kB hugepages reported on node 1
00:07:13.581 [2024-06-10 11:56:02.936270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:13.581 [2024-06-10 11:56:03.006915] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:07:14.151 11:56:03 app_cmdline -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:07:14.151 11:56:03 app_cmdline -- common/autotest_common.sh@863 -- # return 0
00:07:14.151 11:56:03 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:07:14.410 {
00:07:14.410 "version": "SPDK v24.09-pre git sha1 0a5aebcde",
00:07:14.410 "fields": {
00:07:14.410 "major": 24,
00:07:14.410 "minor": 9,
00:07:14.410 "patch": 0,
00:07:14.410 "suffix": "-pre",
00:07:14.410 "commit": "0a5aebcde"
00:07:14.410 }
00:07:14.410 }
00:07:14.410 11:56:03 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:07:14.410 11:56:03 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:07:14.410 11:56:03 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:07:14.410 11:56:03 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:07:14.410 11:56:03 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:07:14.410 11:56:03 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:07:14.410 11:56:03 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:14.410 11:56:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:07:14.410 11:56:03 app_cmdline -- app/cmdline.sh@26 -- # sort
00:07:14.410 11:56:03 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:14.410 11:56:03 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:07:14.410 11:56:03 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:07:14.410 11:56:03 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:07:14.410 11:56:03 app_cmdline -- common/autotest_common.sh@649 -- # local es=0
00:07:14.410 11:56:03 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:07:14.410 11:56:03 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:14.410 11:56:03 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:07:14.410 11:56:03 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:14.410 11:56:03 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:07:14.410 11:56:03 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:14.410 11:56:03 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:07:14.410 11:56:03 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:14.410 11:56:03 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:07:14.410 11:56:03 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:07:14.670 request:
00:07:14.670 {
00:07:14.670 "method": "env_dpdk_get_mem_stats",
00:07:14.670 "req_id": 1
00:07:14.670 }
00:07:14.670 Got JSON-RPC error response
00:07:14.670 response:
00:07:14.670 {
00:07:14.670 "code": -32601,
00:07:14.670 "message": "Method not found"
00:07:14.670 }
00:07:14.670 11:56:04 app_cmdline -- common/autotest_common.sh@652 -- # es=1
00:07:14.670 11:56:04 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:07:14.670 11:56:04 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]]
00:07:14.670 11:56:04 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:07:14.670 11:56:04 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2047709
00:07:14.670 11:56:04 app_cmdline -- common/autotest_common.sh@949 -- # '[' -z 2047709 ']'
00:07:14.670 11:56:04 app_cmdline -- common/autotest_common.sh@953 -- # kill -0 2047709
00:07:14.670 11:56:04 app_cmdline -- common/autotest_common.sh@954 -- # uname
00:07:14.670 11:56:04 app_cmdline -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:07:14.670 11:56:04 app_cmdline -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2047709
00:07:14.670 11:56:04 app_cmdline -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:07:14.670 11:56:04 app_cmdline -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:07:14.670 11:56:04 app_cmdline -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2047709'
killing process with pid 2047709
00:07:14.670 11:56:04 app_cmdline -- common/autotest_common.sh@968 -- # kill 2047709
00:07:14.670 11:56:04 app_cmdline -- common/autotest_common.sh@973 -- # wait 2047709
00:07:14.931
00:07:14.931 real 0m1.703s
00:07:14.931 user 0m1.975s
00:07:14.931 sys 0m0.495s
00:07:14.931 11:56:04 app_cmdline -- common/autotest_common.sh@1125 -- # xtrace_disable
00:07:14.931 11:56:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:07:14.931 ************************************
00:07:14.931 END TEST app_cmdline
00:07:14.931 ************************************
00:07:15.190 11:56:04 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh
00:07:15.191 11:56:04 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:07:15.191 11:56:04 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:07:15.191 11:56:04 -- common/autotest_common.sh@10 -- # set +x
00:07:15.191 ************************************
00:07:15.191 START TEST version
00:07:15.191 ************************************
00:07:15.191 11:56:04 version -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh
00:07:15.191 * Looking for test storage...
00:07:15.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:07:15.191 11:56:04 version -- app/version.sh@17 -- # get_header_version major
00:07:15.191 11:56:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:07:15.191 11:56:04 version -- app/version.sh@14 -- # cut -f2
00:07:15.191 11:56:04 version -- app/version.sh@14 -- # tr -d '"'
00:07:15.191 11:56:04 version -- app/version.sh@17 -- # major=24
00:07:15.191 11:56:04 version -- app/version.sh@18 -- # get_header_version minor
00:07:15.191 11:56:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:07:15.191 11:56:04 version -- app/version.sh@14 -- # cut -f2
00:07:15.191 11:56:04 version -- app/version.sh@14 -- # tr -d '"'
00:07:15.191 11:56:04 version -- app/version.sh@18 -- # minor=9
00:07:15.191 11:56:04 version -- app/version.sh@19 -- # get_header_version patch
00:07:15.191 11:56:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:07:15.191 11:56:04 version -- app/version.sh@14 -- # cut -f2
00:07:15.191 11:56:04 version -- app/version.sh@14 -- # tr -d '"'
00:07:15.191 11:56:04 version -- app/version.sh@19 -- # patch=0
00:07:15.191 11:56:04 version -- app/version.sh@20 -- # get_header_version suffix
00:07:15.191 11:56:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:07:15.191 11:56:04 version -- app/version.sh@14 -- # cut -f2
00:07:15.191 11:56:04 version -- app/version.sh@14 -- # tr -d '"'
00:07:15.191 11:56:04 version -- app/version.sh@20 -- # suffix=-pre
00:07:15.191 11:56:04 version -- app/version.sh@22 -- # version=24.9
00:07:15.191 11:56:04 version -- app/version.sh@25 -- # (( patch != 0 ))
00:07:15.191 11:56:04 version -- app/version.sh@28 -- # version=24.9rc0
00:07:15.191 11:56:04 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python
00:07:15.191 11:56:04 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:07:15.191 11:56:04 version -- app/version.sh@30 -- # py_version=24.9rc0
00:07:15.191 11:56:04 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]]
00:07:15.191
00:07:15.191 real 0m0.169s
00:07:15.191 user 0m0.084s
00:07:15.191 sys 0m0.125s
00:07:15.191 11:56:04 version -- common/autotest_common.sh@1125 -- # xtrace_disable
00:07:15.191 11:56:04 version -- common/autotest_common.sh@10 -- # set +x
00:07:15.191 ************************************
00:07:15.191 END TEST version
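The version test above greps `SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX` out of `include/spdk/version.h` and assembles `24.9rc0`: the patch level is appended only when non-zero, and a `-pre` suffix is reported as an `rc0` build. A sketch of that assembly, using the values from the trace; the exact suffix-to-rc0 condition is inferred from the log, not copied from `version.sh`:

```shell
# Reassemble the version string the way the version.sh trace derives 24.9rc0.
# major/minor/patch/suffix are the values grepped from version.h in the log.
major=24; minor=9; patch=0; suffix=-pre

version="$major.$minor"
if (( patch != 0 )); then
  version="$version.$patch"     # patch level only appears when non-zero
fi
if [ -n "$suffix" ]; then
  version="${version}rc0"       # a -pre suffix is reported as an rc0 build
fi
echo "$version"
```

With the traced inputs this prints `24.9rc0`, matching both the shell-derived `version` and the `py_version` reported by `python3 -c 'import spdk; print(spdk.__version__)'`.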
************************************
00:07:15.191 11:56:04 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']'
00:07:15.191 11:56:04 -- spdk/autotest.sh@198 -- # uname -s
00:07:15.495 11:56:04 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]]
00:07:15.495 11:56:04 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]]
00:07:15.495 11:56:04 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]]
00:07:15.495 11:56:04 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']'
00:07:15.495 11:56:04 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:07:15.495 11:56:04 -- spdk/autotest.sh@260 -- # timing_exit lib
00:07:15.495 11:56:04 -- common/autotest_common.sh@729 -- # xtrace_disable
00:07:15.495 11:56:04 -- common/autotest_common.sh@10 -- # set +x
00:07:15.495 11:56:04 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:07:15.495 11:56:04 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']'
00:07:15.495 11:56:04 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']'
00:07:15.495 11:56:04 -- spdk/autotest.sh@280 -- # export NET_TYPE
00:07:15.495 11:56:04 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']'
00:07:15.495 11:56:04 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']'
00:07:15.495 11:56:04 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp
00:07:15.495 11:56:04 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:07:15.495 11:56:04 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:07:15.495 11:56:04 -- common/autotest_common.sh@10 -- # set +x
00:07:15.495 ************************************
00:07:15.495 START TEST nvmf_tcp
00:07:15.495 ************************************
00:07:15.495 11:56:04 nvmf_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp
00:07:15.495 * Looking for test storage...
00:07:15.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']'
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:15.495 11:56:04 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:15.495 11:56:04 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:15.495 11:56:04 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:15.495 11:56:04 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:15.495 11:56:04 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:15.495 11:56:04 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:15.495 11:56:04 nvmf_tcp -- paths/export.sh@5 -- # export PATH
00:07:15.495 11:56:04 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/common.sh@47 -- # : 0
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@")
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target
00:07:15.495 11:56:04 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable
00:07:15.495 11:56:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]]
00:07:15.495 11:56:04 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 11:56:04 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 11:56:04 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 11:56:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:07:15.495
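The `build_nvmf_app_args` trace above builds the target's command line incrementally in a bash array: it appends `-i "$NVMF_APP_SHM_ID" -e 0xFFFF` and then `"${NO_HUGE[@]}"`, which expands to zero words when that array is empty. A sketch of the idiom; the binary path and shm-id value here are placeholder assumptions, not taken from the log:

```shell
# Sketch of the array-building idiom from build_nvmf_app_args in the trace.
# The binary path and shm id below are placeholder assumptions.
NVMF_APP=(./build/bin/nvmf_tgt)
NVMF_APP_SHM_ID=0
NO_HUGE=()

NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)  # shm id and log-mask flags
NVMF_APP+=("${NO_HUGE[@]}")                  # an empty array contributes no words

echo "${NVMF_APP[*]}"
```

Building the command as an array rather than a string keeps each argument as one word even if it contains spaces, which is why the scripts traced here consistently use `+=( ... )`.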
************************************
00:07:15.495 START TEST nvmf_example
************************************
00:07:15.495 11:56:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp
00:07:15.816 * Looking for test storage...
00:07:15.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:15.816 11:56:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf")
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']'
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000)
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}")
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@723 -- # xtrace_disable
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable
00:07:15.817 11:56:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=()
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=()
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=()
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=()
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=()
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=()
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=()
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
Found 0000:af:00.0 (0x8086 - 0x159b)
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
Found 0000:af:00.1 (0x8086 - 0x159b)
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]]
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:22.398 11:56:11 nvmf_tcp.nvmf_example --
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:22.398 Found net devices under 0000:af:00.0: cvl_0_0 00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.398 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:22.399 Found net devices under 0000:af:00.1: cvl_0_1 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:22.399 11:56:11 
nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:22.399 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:22.399 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:07:22.399 00:07:22.399 --- 10.0.0.2 ping statistics --- 00:07:22.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.399 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:22.399 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:22.399 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:07:22.399 00:07:22.399 --- 10.0.0.1 ping statistics --- 00:07:22.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.399 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:22.399 11:56:11 
nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2051503 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2051503 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@830 -- # '[' -z 2051503 ']' 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:22.399 11:56:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:22.659 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.595 11:56:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:23.595 11:56:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@863 -- # return 0 00:07:23.595 11:56:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:23.595 11:56:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:23.595 11:56:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:23.595 11:56:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:23.595 11:56:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:23.595 11:56:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:23.595 11:56:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:23.595 11:56:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:23.595 11:56:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:23.595 11:56:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:23.595 11:56:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:23.595 11:56:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:23.595 11:56:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:23.595 11:56:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:23.595 11:56:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:23.595 11:56:12 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:23.595 11:56:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:23.595 11:56:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:23.595 11:56:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:23.595 11:56:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:23.595 11:56:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:23.595 11:56:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:23.595 11:56:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:23.595 11:56:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:23.595 11:56:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:23.595 11:56:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:23.595 11:56:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:23.595 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.572 Initializing NVMe Controllers 00:07:33.572 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:33.572 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:33.572 Initialization complete. Launching workers. 
00:07:33.572 ======================================================== 00:07:33.572 Latency(us) 00:07:33.572 Device Information : IOPS MiB/s Average min max 00:07:33.572 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18937.03 73.97 3379.32 548.06 15475.46 00:07:33.572 ======================================================== 00:07:33.572 Total : 18937.03 73.97 3379.32 548.06 15475.46 00:07:33.572 00:07:33.572 11:56:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:33.572 11:56:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:33.572 11:56:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:33.572 11:56:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:33.572 11:56:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:33.572 11:56:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:33.572 11:56:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:33.572 11:56:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:33.572 rmmod nvme_tcp 00:07:33.572 rmmod nvme_fabrics 00:07:33.832 rmmod nvme_keyring 00:07:33.832 11:56:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:33.832 11:56:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:33.832 11:56:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:33.832 11:56:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2051503 ']' 00:07:33.832 11:56:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2051503 00:07:33.832 11:56:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@949 -- # '[' -z 2051503 ']' 00:07:33.832 11:56:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # kill -0 2051503 00:07:33.832 11:56:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # uname 00:07:33.832 11:56:23 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:33.832 11:56:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2051503 00:07:33.832 11:56:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@955 -- # process_name=nvmf 00:07:33.832 11:56:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@959 -- # '[' nvmf = sudo ']' 00:07:33.832 11:56:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2051503' 00:07:33.832 killing process with pid 2051503 00:07:33.832 11:56:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@968 -- # kill 2051503 00:07:33.832 11:56:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@973 -- # wait 2051503 00:07:33.832 nvmf threads initialize successfully 00:07:33.832 bdev subsystem init successfully 00:07:33.832 created a nvmf target service 00:07:33.832 create targets's poll groups done 00:07:33.832 all subsystems of target started 00:07:33.832 nvmf target is running 00:07:33.832 all subsystems of target stopped 00:07:33.832 destroy targets's poll groups done 00:07:33.832 destroyed the nvmf target service 00:07:33.832 bdev subsystem finish successfully 00:07:33.832 nvmf threads destroy successfully 00:07:33.832 11:56:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:33.832 11:56:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:33.832 11:56:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:33.832 11:56:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:33.832 11:56:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:33.832 11:56:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.832 11:56:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:33.832 11:56:23 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:36.373 11:56:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:36.373 11:56:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:36.373 11:56:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:36.373 11:56:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:36.373 00:07:36.373 real 0m20.513s 00:07:36.373 user 0m45.400s 00:07:36.373 sys 0m7.186s 00:07:36.373 11:56:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:36.373 11:56:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:36.373 ************************************ 00:07:36.373 END TEST nvmf_example 00:07:36.373 ************************************ 00:07:36.373 11:56:25 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:36.373 11:56:25 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:36.373 11:56:25 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:36.373 11:56:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:36.373 ************************************ 00:07:36.373 START TEST nvmf_filesystem 00:07:36.373 ************************************ 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:36.373 * Looking for test storage... 
00:07:36.373 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:36.373 11:56:25 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:36.373 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 
00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 
00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:36.374 #define SPDK_CONFIG_H 00:07:36.374 
#define SPDK_CONFIG_APPS 1 00:07:36.374 #define SPDK_CONFIG_ARCH native 00:07:36.374 #undef SPDK_CONFIG_ASAN 00:07:36.374 #undef SPDK_CONFIG_AVAHI 00:07:36.374 #undef SPDK_CONFIG_CET 00:07:36.374 #define SPDK_CONFIG_COVERAGE 1 00:07:36.374 #define SPDK_CONFIG_CROSS_PREFIX 00:07:36.374 #undef SPDK_CONFIG_CRYPTO 00:07:36.374 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:36.374 #undef SPDK_CONFIG_CUSTOMOCF 00:07:36.374 #undef SPDK_CONFIG_DAOS 00:07:36.374 #define SPDK_CONFIG_DAOS_DIR 00:07:36.374 #define SPDK_CONFIG_DEBUG 1 00:07:36.374 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:36.374 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:36.374 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:36.374 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:36.374 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:36.374 #undef SPDK_CONFIG_DPDK_UADK 00:07:36.374 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:36.374 #define SPDK_CONFIG_EXAMPLES 1 00:07:36.374 #undef SPDK_CONFIG_FC 00:07:36.374 #define SPDK_CONFIG_FC_PATH 00:07:36.374 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:36.374 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:36.374 #undef SPDK_CONFIG_FUSE 00:07:36.374 #undef SPDK_CONFIG_FUZZER 00:07:36.374 #define SPDK_CONFIG_FUZZER_LIB 00:07:36.374 #undef SPDK_CONFIG_GOLANG 00:07:36.374 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:36.374 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:36.374 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:36.374 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:36.374 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:36.374 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:36.374 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:36.374 #define SPDK_CONFIG_IDXD 1 00:07:36.374 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:36.374 #undef SPDK_CONFIG_IPSEC_MB 00:07:36.374 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:36.374 #define SPDK_CONFIG_ISAL 1 00:07:36.374 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:36.374 #define SPDK_CONFIG_ISCSI_INITIATOR 1 
00:07:36.374 #define SPDK_CONFIG_LIBDIR 00:07:36.374 #undef SPDK_CONFIG_LTO 00:07:36.374 #define SPDK_CONFIG_MAX_LCORES 00:07:36.374 #define SPDK_CONFIG_NVME_CUSE 1 00:07:36.374 #undef SPDK_CONFIG_OCF 00:07:36.374 #define SPDK_CONFIG_OCF_PATH 00:07:36.374 #define SPDK_CONFIG_OPENSSL_PATH 00:07:36.374 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:36.374 #define SPDK_CONFIG_PGO_DIR 00:07:36.374 #undef SPDK_CONFIG_PGO_USE 00:07:36.374 #define SPDK_CONFIG_PREFIX /usr/local 00:07:36.374 #undef SPDK_CONFIG_RAID5F 00:07:36.374 #undef SPDK_CONFIG_RBD 00:07:36.374 #define SPDK_CONFIG_RDMA 1 00:07:36.374 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:36.374 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:36.374 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:36.374 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:36.374 #define SPDK_CONFIG_SHARED 1 00:07:36.374 #undef SPDK_CONFIG_SMA 00:07:36.374 #define SPDK_CONFIG_TESTS 1 00:07:36.374 #undef SPDK_CONFIG_TSAN 00:07:36.374 #define SPDK_CONFIG_UBLK 1 00:07:36.374 #define SPDK_CONFIG_UBSAN 1 00:07:36.374 #undef SPDK_CONFIG_UNIT_TESTS 00:07:36.374 #undef SPDK_CONFIG_URING 00:07:36.374 #define SPDK_CONFIG_URING_PATH 00:07:36.374 #undef SPDK_CONFIG_URING_ZNS 00:07:36.374 #undef SPDK_CONFIG_USDT 00:07:36.374 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:36.374 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:36.374 #define SPDK_CONFIG_VFIO_USER 1 00:07:36.374 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:36.374 #define SPDK_CONFIG_VHOST 1 00:07:36.374 #define SPDK_CONFIG_VIRTIO 1 00:07:36.374 #undef SPDK_CONFIG_VTUNE 00:07:36.374 #define SPDK_CONFIG_VTUNE_DIR 00:07:36.374 #define SPDK_CONFIG_WERROR 1 00:07:36.374 #define SPDK_CONFIG_WPDK_DIR 00:07:36.374 #undef SPDK_CONFIG_XNVME 00:07:36.374 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.374 11:56:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 
00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:36.375 11:56:25 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:36.375 11:56:25 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:36.375 11:56:25 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:36.375 11:56:25 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:36.375 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:36.376 11:56:25 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export 
DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export 
LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:36.376 11:56:25 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:36.376 11:56:25 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 2053872 ]] 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 2053872 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:36.376 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.QKVxJN 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 
00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.QKVxJN/tests/target /tmp/spdk.QKVxJN 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=957145088 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4327284736 00:07:36.377 11:56:25 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=56001589248 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742305280 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5740716032 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30867775488 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871150592 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12339077120 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348461056 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # 
uses["$mount"]=9383936 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30870663168 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871154688 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=491520 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6174224384 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174228480 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:36.377 * Looking for test storage... 
00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=56001589248 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=7955308544 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:36.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1681 -- # set -o errtrace 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # true 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1688 -- # xtrace_fd 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:36.377 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:36.378 11:56:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 
00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:42.955 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:42.955 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:42.955 Found net devices under 0000:af:00.0: cvl_0_0 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:42.955 Found net devices under 0000:af:00.1: cvl_0_1 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:42.955 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:43.215 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:43.215 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:43.215 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:43.215 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:43.215 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:43.215 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:43.215 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:43.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:43.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:07:43.215 00:07:43.215 --- 10.0.0.2 ping statistics --- 00:07:43.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.215 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:07:43.215 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:43.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:43.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:07:43.215 00:07:43.215 --- 10.0.0.1 ping statistics --- 00:07:43.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.215 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:07:43.215 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:43.215 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:43.215 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:43.215 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:43.215 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:43.215 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:43.215 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:43.215 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:43.215 11:56:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:43.215 11:56:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:43.215 11:56:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:43.215 11:56:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:43.215 11:56:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:43.215 ************************************ 00:07:43.215 START TEST nvmf_filesystem_no_in_capsule 00:07:43.215 ************************************ 00:07:43.215 11:56:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 0 00:07:43.215 11:56:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # 
in_capsule=0 00:07:43.215 11:56:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:43.215 11:56:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:43.215 11:56:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:43.215 11:56:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.215 11:56:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2057146 00:07:43.215 11:56:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2057146 00:07:43.215 11:56:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:43.215 11:56:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 2057146 ']' 00:07:43.215 11:56:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.215 11:56:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:43.216 11:56:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:43.216 11:56:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:43.216 11:56:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.475 [2024-06-10 11:56:32.751446] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:07:43.475 [2024-06-10 11:56:32.751504] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.475 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.475 [2024-06-10 11:56:32.825652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:43.475 [2024-06-10 11:56:32.902809] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:43.475 [2024-06-10 11:56:32.902846] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:43.475 [2024-06-10 11:56:32.902859] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:43.475 [2024-06-10 11:56:32.902867] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:43.475 [2024-06-10 11:56:32.902874] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:43.475 [2024-06-10 11:56:32.902919] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.475 [2024-06-10 11:56:32.903015] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.475 [2024-06-10 11:56:32.903099] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.475 [2024-06-10 11:56:32.903100] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.414 [2024-06-10 11:56:33.619497] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.414 Malloc1 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:44.414 11:56:33 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.414 [2024-06-10 11:56:33.772716] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:07:44.414 { 00:07:44.414 "name": "Malloc1", 00:07:44.414 "aliases": [ 00:07:44.414 "725776bc-fa7a-42ab-95d2-2f6e7d03682f" 00:07:44.414 ], 00:07:44.414 "product_name": "Malloc disk", 
00:07:44.414 "block_size": 512, 00:07:44.414 "num_blocks": 1048576, 00:07:44.414 "uuid": "725776bc-fa7a-42ab-95d2-2f6e7d03682f", 00:07:44.414 "assigned_rate_limits": { 00:07:44.414 "rw_ios_per_sec": 0, 00:07:44.414 "rw_mbytes_per_sec": 0, 00:07:44.414 "r_mbytes_per_sec": 0, 00:07:44.414 "w_mbytes_per_sec": 0 00:07:44.414 }, 00:07:44.414 "claimed": true, 00:07:44.414 "claim_type": "exclusive_write", 00:07:44.414 "zoned": false, 00:07:44.414 "supported_io_types": { 00:07:44.414 "read": true, 00:07:44.414 "write": true, 00:07:44.414 "unmap": true, 00:07:44.414 "write_zeroes": true, 00:07:44.414 "flush": true, 00:07:44.414 "reset": true, 00:07:44.414 "compare": false, 00:07:44.414 "compare_and_write": false, 00:07:44.414 "abort": true, 00:07:44.414 "nvme_admin": false, 00:07:44.414 "nvme_io": false 00:07:44.414 }, 00:07:44.414 "memory_domains": [ 00:07:44.414 { 00:07:44.414 "dma_device_id": "system", 00:07:44.414 "dma_device_type": 1 00:07:44.414 }, 00:07:44.414 { 00:07:44.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.414 "dma_device_type": 2 00:07:44.414 } 00:07:44.414 ], 00:07:44.414 "driver_specific": {} 00:07:44.414 } 00:07:44.414 ]' 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:07:44.414 11:56:33 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:44.414 11:56:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:45.792 11:56:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:45.792 11:56:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:07:45.793 11:56:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:07:45.793 11:56:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:07:45.793 11:56:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:07:48.328 11:56:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:07:48.328 11:56:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:48.328 11:56:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:07:48.328 11:56:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:07:48.328 11:56:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:07:48.328 11:56:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:07:48.328 11:56:37 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:48.328 11:56:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:48.328 11:56:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:48.328 11:56:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:48.328 11:56:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:48.328 11:56:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:48.328 11:56:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:48.328 11:56:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:48.328 11:56:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:48.328 11:56:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:48.328 11:56:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:48.328 11:56:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:48.328 11:56:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:49.706 11:56:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:49.706 11:56:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 
00:07:49.706 11:56:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:49.707 11:56:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:49.707 11:56:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.707 ************************************ 00:07:49.707 START TEST filesystem_ext4 00:07:49.707 ************************************ 00:07:49.707 11:56:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:49.707 11:56:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:49.707 11:56:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:49.707 11:56:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:49.707 11:56:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:07:49.707 11:56:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:49.707 11:56:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:07:49.707 11:56:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local force 00:07:49.707 11:56:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:07:49.707 11:56:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- 
common/autotest_common.sh@931 -- # force=-F 00:07:49.707 11:56:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:49.707 mke2fs 1.46.5 (30-Dec-2021) 00:07:49.707 Discarding device blocks: 0/522240 done 00:07:49.707 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:49.707 Filesystem UUID: e8b36f57-34d0-442a-aabc-2b8526d31262 00:07:49.707 Superblock backups stored on blocks: 00:07:49.707 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:49.707 00:07:49.707 Allocating group tables: 0/64 done 00:07:49.707 Writing inode tables: 0/64 done 00:07:49.707 Creating journal (8192 blocks): done 00:07:49.707 Writing superblocks and filesystem accounting information: 0/64 done 00:07:49.707 00:07:49.707 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@944 -- # return 0 00:07:49.707 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:49.967 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:49.967 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:49.967 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:49.967 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:49.967 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:49.967 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:49.967 11:56:39 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2057146 00:07:49.967 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:49.967 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:49.967 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:49.967 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:49.967 00:07:49.967 real 0m0.466s 00:07:49.967 user 0m0.035s 00:07:49.967 sys 0m0.071s 00:07:49.967 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:49.967 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:49.967 ************************************ 00:07:49.967 END TEST filesystem_ext4 00:07:49.967 ************************************ 00:07:49.967 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:49.967 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:49.967 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:49.967 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.967 ************************************ 00:07:49.967 START TEST filesystem_btrfs 00:07:49.967 ************************************ 00:07:49.967 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs 
-- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:49.967 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:49.967 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:49.967 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:49.967 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:07:49.967 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:49.967 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:07:49.967 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local force 00:07:49.967 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:07:49.967 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:07:49.967 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:50.226 btrfs-progs v6.6.2 00:07:50.226 See https://btrfs.readthedocs.io for more information. 00:07:50.226 00:07:50.226 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:50.226 NOTE: several default settings have changed in version 5.15, please make sure 00:07:50.226 this does not affect your deployments: 00:07:50.226 - DUP for metadata (-m dup) 00:07:50.227 - enabled no-holes (-O no-holes) 00:07:50.227 - enabled free-space-tree (-R free-space-tree) 00:07:50.227 00:07:50.227 Label: (null) 00:07:50.227 UUID: 9270571a-150f-4aa9-9d48-042a1e325097 00:07:50.227 Node size: 16384 00:07:50.227 Sector size: 4096 00:07:50.227 Filesystem size: 510.00MiB 00:07:50.227 Block group profiles: 00:07:50.227 Data: single 8.00MiB 00:07:50.227 Metadata: DUP 32.00MiB 00:07:50.227 System: DUP 8.00MiB 00:07:50.227 SSD detected: yes 00:07:50.227 Zoned device: no 00:07:50.227 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:50.227 Runtime features: free-space-tree 00:07:50.227 Checksum: crc32c 00:07:50.227 Number of devices: 1 00:07:50.227 Devices: 00:07:50.227 ID SIZE PATH 00:07:50.227 1 510.00MiB /dev/nvme0n1p1 00:07:50.227 00:07:50.227 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@944 -- # return 0 00:07:50.227 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:50.486 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:50.486 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:50.486 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:50.486 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:50.486 11:56:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:50.486 11:56:39 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:50.745 11:56:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2057146 00:07:50.745 11:56:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:50.745 11:56:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:50.745 11:56:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:50.745 11:56:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:50.745 00:07:50.745 real 0m0.605s 00:07:50.745 user 0m0.042s 00:07:50.745 sys 0m0.122s 00:07:50.745 11:56:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:50.745 11:56:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:50.745 ************************************ 00:07:50.745 END TEST filesystem_btrfs 00:07:50.745 ************************************ 00:07:50.745 11:56:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:50.745 11:56:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:50.745 11:56:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:50.745 11:56:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:50.745 ************************************ 00:07:50.745 START TEST 
filesystem_xfs 00:07:50.745 ************************************ 00:07:50.745 11:56:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:07:50.745 11:56:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:50.745 11:56:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:50.745 11:56:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:50.745 11:56:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:07:50.745 11:56:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:50.745 11:56:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local i=0 00:07:50.745 11:56:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local force 00:07:50.745 11:56:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:07:50.746 11:56:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # force=-f 00:07:50.746 11:56:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:50.746 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:50.746 = sectsz=512 attr=2, projid32bit=1 00:07:50.746 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:50.746 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:50.746 data = bsize=4096 blocks=130560, imaxpct=25 
00:07:50.746 = sunit=0 swidth=0 blks 00:07:50.746 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:50.746 log =internal log bsize=4096 blocks=16384, version=2 00:07:50.746 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:50.746 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:52.120 Discarding blocks...Done. 00:07:52.120 11:56:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@944 -- # return 0 00:07:52.120 11:56:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:54.728 11:56:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:54.728 11:56:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:54.728 11:56:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:54.728 11:56:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:54.728 11:56:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:54.728 11:56:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:54.728 11:56:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2057146 00:07:54.728 11:56:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:54.728 11:56:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:54.728 11:56:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:07:54.728 11:56:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:54.728 00:07:54.728 real 0m3.606s 00:07:54.728 user 0m0.029s 00:07:54.728 sys 0m0.085s 00:07:54.728 11:56:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:54.728 11:56:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:54.728 ************************************ 00:07:54.728 END TEST filesystem_xfs 00:07:54.728 ************************************ 00:07:54.728 11:56:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:54.728 11:56:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:54.728 11:56:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:54.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:54.728 11:56:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:54.728 11:56:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:07:54.728 11:56:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:07:54.728 11:56:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:54.728 11:56:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:07:54.728 11:56:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:07:54.728 11:56:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:07:54.728 11:56:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:54.728 11:56:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:54.728 11:56:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.728 11:56:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:54.728 11:56:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:54.728 11:56:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2057146 00:07:54.728 11:56:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 2057146 ']' 00:07:54.728 11:56:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # kill -0 2057146 00:07:54.728 11:56:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # uname 00:07:54.728 11:56:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:54.728 11:56:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2057146 00:07:54.728 11:56:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:54.728 11:56:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:54.728 11:56:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@967 -- # echo 'killing process with pid 2057146' 00:07:54.728 killing process with pid 2057146 00:07:54.728 11:56:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # kill 2057146 00:07:54.728 11:56:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # wait 2057146 00:07:54.988 11:56:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:54.988 00:07:54.989 real 0m11.693s 00:07:54.989 user 0m45.562s 00:07:54.989 sys 0m1.723s 00:07:54.989 11:56:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:54.989 11:56:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.989 ************************************ 00:07:54.989 END TEST nvmf_filesystem_no_in_capsule 00:07:54.989 ************************************ 00:07:54.989 11:56:44 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:54.989 11:56:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:54.989 11:56:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:54.989 11:56:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:54.989 ************************************ 00:07:54.989 START TEST nvmf_filesystem_in_capsule 00:07:54.989 ************************************ 00:07:54.989 11:56:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 4096 00:07:54.989 11:56:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:54.989 11:56:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:54.989 11:56:44 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:54.989 11:56:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:54.989 11:56:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.989 11:56:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2059454 00:07:54.989 11:56:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2059454 00:07:54.989 11:56:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 2059454 ']' 00:07:54.989 11:56:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.989 11:56:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:54.989 11:56:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.989 11:56:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:54.989 11:56:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.989 11:56:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:55.248 [2024-06-10 11:56:44.520955] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:07:55.248 [2024-06-10 11:56:44.521003] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.248 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.248 [2024-06-10 11:56:44.593910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:55.248 [2024-06-10 11:56:44.666730] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:55.248 [2024-06-10 11:56:44.666770] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:55.248 [2024-06-10 11:56:44.666780] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:55.248 [2024-06-10 11:56:44.666808] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:55.248 [2024-06-10 11:56:44.666815] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:55.248 [2024-06-10 11:56:44.666872] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.248 [2024-06-10 11:56:44.666964] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:55.248 [2024-06-10 11:56:44.667051] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:55.248 [2024-06-10 11:56:44.667053] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.816 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:55.816 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:07:55.816 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:55.816 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:55.816 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.076 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:56.076 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:56.076 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:56.076 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:56.076 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.076 [2024-06-10 11:56:45.364264] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:56.076 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:07:56.076 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:56.076 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:56.076 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.076 Malloc1 00:07:56.076 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:56.076 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:56.076 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:56.076 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.076 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:56.076 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:56.076 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:56.076 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.076 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:56.076 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:56.076 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:56.076 11:56:45 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.076 [2024-06-10 11:56:45.519673] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:56.076 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:56.076 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:56.076 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:07:56.076 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:07:56.076 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:07:56.076 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:07:56.076 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:56.076 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:56.076 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.076 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:56.076 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:07:56.076 { 00:07:56.076 "name": "Malloc1", 00:07:56.076 "aliases": [ 00:07:56.076 "acb453e0-2b18-4f8e-92bf-61c781569667" 00:07:56.076 ], 00:07:56.076 "product_name": "Malloc disk", 00:07:56.076 "block_size": 512, 00:07:56.076 "num_blocks": 1048576, 00:07:56.076 "uuid": "acb453e0-2b18-4f8e-92bf-61c781569667", 00:07:56.076 "assigned_rate_limits": { 
00:07:56.076 "rw_ios_per_sec": 0, 00:07:56.076 "rw_mbytes_per_sec": 0, 00:07:56.076 "r_mbytes_per_sec": 0, 00:07:56.076 "w_mbytes_per_sec": 0 00:07:56.076 }, 00:07:56.076 "claimed": true, 00:07:56.076 "claim_type": "exclusive_write", 00:07:56.076 "zoned": false, 00:07:56.076 "supported_io_types": { 00:07:56.076 "read": true, 00:07:56.076 "write": true, 00:07:56.076 "unmap": true, 00:07:56.076 "write_zeroes": true, 00:07:56.076 "flush": true, 00:07:56.076 "reset": true, 00:07:56.076 "compare": false, 00:07:56.076 "compare_and_write": false, 00:07:56.076 "abort": true, 00:07:56.076 "nvme_admin": false, 00:07:56.076 "nvme_io": false 00:07:56.076 }, 00:07:56.076 "memory_domains": [ 00:07:56.076 { 00:07:56.076 "dma_device_id": "system", 00:07:56.076 "dma_device_type": 1 00:07:56.076 }, 00:07:56.076 { 00:07:56.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.076 "dma_device_type": 2 00:07:56.076 } 00:07:56.076 ], 00:07:56.076 "driver_specific": {} 00:07:56.076 } 00:07:56.076 ]' 00:07:56.076 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:07:56.334 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:07:56.334 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:07:56.334 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:07:56.334 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:07:56.334 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:07:56.334 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:56.334 11:56:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme 
connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:57.710 11:56:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:57.710 11:56:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:07:57.710 11:56:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:07:57.710 11:56:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:07:57.710 11:56:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:07:59.629 11:56:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:07:59.629 11:56:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:59.629 11:56:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:07:59.629 11:56:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:07:59.629 11:56:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:07:59.629 11:56:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:07:59.629 11:56:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:59.629 11:56:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:59.629 11:56:49 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:59.629 11:56:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:59.629 11:56:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:59.629 11:56:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:59.629 11:56:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:59.629 11:56:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:59.629 11:56:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:59.629 11:56:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:59.629 11:56:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:00.195 11:56:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:00.453 11:56:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:01.390 11:56:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:01.390 11:56:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:01.390 11:56:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:08:01.390 11:56:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:01.390 11:56:50 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:01.390 ************************************ 00:08:01.390 START TEST filesystem_in_capsule_ext4 00:08:01.390 ************************************ 00:08:01.390 11:56:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:01.390 11:56:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:01.390 11:56:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:01.390 11:56:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:01.390 11:56:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:08:01.390 11:56:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:08:01.390 11:56:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:08:01.391 11:56:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local force 00:08:01.391 11:56:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:08:01.391 11:56:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:08:01.391 11:56:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 
00:08:01.391 mke2fs 1.46.5 (30-Dec-2021) 00:08:01.649 Discarding device blocks: 0/522240 done 00:08:01.649 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:01.649 Filesystem UUID: f1c8f9ad-0a9e-4308-9af6-b4b7f7a084ca 00:08:01.649 Superblock backups stored on blocks: 00:08:01.649 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:01.649 00:08:01.649 Allocating group tables: 0/64 done 00:08:01.649 Writing inode tables: 0/64 done 00:08:01.906 Creating journal (8192 blocks): done 00:08:02.840 Writing superblocks and filesystem accounting information: 0/6450/64 done 00:08:02.840 00:08:02.840 11:56:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@944 -- # return 0 00:08:02.840 11:56:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:03.098 11:56:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:03.356 11:56:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:03.356 11:56:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:03.356 11:56:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:03.356 11:56:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:03.356 11:56:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:03.356 11:56:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2059454 00:08:03.356 11:56:52 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:03.356 11:56:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:03.356 11:56:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:03.356 11:56:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:03.356 00:08:03.356 real 0m1.797s 00:08:03.356 user 0m0.036s 00:08:03.356 sys 0m0.071s 00:08:03.356 11:56:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:03.356 11:56:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:03.356 ************************************ 00:08:03.356 END TEST filesystem_in_capsule_ext4 00:08:03.357 ************************************ 00:08:03.357 11:56:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:03.357 11:56:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:08:03.357 11:56:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:03.357 11:56:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.357 ************************************ 00:08:03.357 START TEST filesystem_in_capsule_btrfs 00:08:03.357 ************************************ 00:08:03.357 11:56:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create 
btrfs nvme0n1 00:08:03.357 11:56:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:03.357 11:56:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:03.357 11:56:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:03.357 11:56:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:08:03.357 11:56:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:08:03.357 11:56:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:08:03.357 11:56:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local force 00:08:03.357 11:56:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:08:03.357 11:56:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:08:03.357 11:56:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:03.615 btrfs-progs v6.6.2 00:08:03.615 See https://btrfs.readthedocs.io for more information. 00:08:03.615 00:08:03.615 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:03.615 NOTE: several default settings have changed in version 5.15, please make sure 00:08:03.615 this does not affect your deployments: 00:08:03.615 - DUP for metadata (-m dup) 00:08:03.615 - enabled no-holes (-O no-holes) 00:08:03.615 - enabled free-space-tree (-R free-space-tree) 00:08:03.615 00:08:03.615 Label: (null) 00:08:03.615 UUID: 05084363-a984-43c1-98f2-6ef4cb0908d4 00:08:03.615 Node size: 16384 00:08:03.615 Sector size: 4096 00:08:03.615 Filesystem size: 510.00MiB 00:08:03.615 Block group profiles: 00:08:03.615 Data: single 8.00MiB 00:08:03.615 Metadata: DUP 32.00MiB 00:08:03.615 System: DUP 8.00MiB 00:08:03.615 SSD detected: yes 00:08:03.615 Zoned device: no 00:08:03.615 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:03.615 Runtime features: free-space-tree 00:08:03.615 Checksum: crc32c 00:08:03.615 Number of devices: 1 00:08:03.615 Devices: 00:08:03.615 ID SIZE PATH 00:08:03.615 1 510.00MiB /dev/nvme0n1p1 00:08:03.615 00:08:03.615 11:56:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@944 -- # return 0 00:08:03.615 11:56:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:04.191 11:56:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:04.191 11:56:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:04.191 11:56:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:04.191 11:56:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:04.191 11:56:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
target/filesystem.sh@29 -- # i=0 00:08:04.191 11:56:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:04.191 11:56:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2059454 00:08:04.191 11:56:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:04.191 11:56:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:04.191 11:56:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:04.191 11:56:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:04.191 00:08:04.191 real 0m0.763s 00:08:04.191 user 0m0.022s 00:08:04.191 sys 0m0.147s 00:08:04.191 11:56:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:04.191 11:56:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:04.191 ************************************ 00:08:04.191 END TEST filesystem_in_capsule_btrfs 00:08:04.191 ************************************ 00:08:04.191 11:56:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:04.191 11:56:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:08:04.191 11:56:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:04.192 11:56:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@10 -- # set +x 00:08:04.192 ************************************ 00:08:04.192 START TEST filesystem_in_capsule_xfs 00:08:04.192 ************************************ 00:08:04.192 11:56:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:08:04.192 11:56:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:04.192 11:56:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:04.192 11:56:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:04.192 11:56:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:08:04.192 11:56:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:08:04.192 11:56:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local i=0 00:08:04.192 11:56:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local force 00:08:04.192 11:56:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:08:04.192 11:56:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # force=-f 00:08:04.192 11:56:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:04.192 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 
00:08:04.192 = sectsz=512 attr=2, projid32bit=1 00:08:04.192 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:04.192 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:04.192 data = bsize=4096 blocks=130560, imaxpct=25 00:08:04.192 = sunit=0 swidth=0 blks 00:08:04.192 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:04.192 log =internal log bsize=4096 blocks=16384, version=2 00:08:04.192 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:04.192 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:05.452 Discarding blocks...Done. 00:08:05.452 11:56:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@944 -- # return 0 00:08:05.452 11:56:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:06.830 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:06.830 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:06.830 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:06.830 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:06.830 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:06.830 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:06.830 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2059454 00:08:06.830 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l 
-o NAME 00:08:06.830 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:06.830 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:06.830 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:06.830 00:08:06.830 real 0m2.687s 00:08:06.830 user 0m0.028s 00:08:06.830 sys 0m0.083s 00:08:06.830 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:06.830 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:06.830 ************************************ 00:08:06.830 END TEST filesystem_in_capsule_xfs 00:08:06.830 ************************************ 00:08:07.089 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:07.089 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:07.089 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:07.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:07.089 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:07.089 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:08:07.089 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:08:07.089 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:07.089 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:08:07.089 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:07.089 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:08:07.089 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:07.089 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:07.089 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:07.089 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:07.089 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:07.089 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2059454 00:08:07.089 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 2059454 ']' 00:08:07.089 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # kill -0 2059454 00:08:07.089 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # uname 00:08:07.089 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:07.089 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2059454 00:08:07.347 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:07.347 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:07.347 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2059454' 00:08:07.347 killing process with pid 2059454 00:08:07.347 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # kill 2059454 00:08:07.347 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # wait 2059454 00:08:07.606 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:07.606 00:08:07.606 real 0m12.510s 00:08:07.606 user 0m48.869s 00:08:07.606 sys 0m1.686s 00:08:07.606 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:07.606 11:56:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:07.606 ************************************ 00:08:07.606 END TEST nvmf_filesystem_in_capsule 00:08:07.606 ************************************ 00:08:07.606 11:56:57 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:07.606 11:56:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:07.606 11:56:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:07.606 11:56:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:07.606 11:56:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:07.606 11:56:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:07.606 11:56:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:07.606 rmmod nvme_tcp 00:08:07.606 rmmod nvme_fabrics 00:08:07.606 rmmod nvme_keyring 00:08:07.606 11:56:57 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:07.606 11:56:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:07.606 11:56:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:07.606 11:56:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:07.606 11:56:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:07.606 11:56:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:07.606 11:56:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:07.606 11:56:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:07.606 11:56:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:07.606 11:56:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.606 11:56:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:07.606 11:56:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.141 11:56:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:10.141 00:08:10.141 real 0m33.633s 00:08:10.141 user 1m36.508s 00:08:10.141 sys 0m8.808s 00:08:10.141 11:56:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:10.141 11:56:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.141 ************************************ 00:08:10.141 END TEST nvmf_filesystem 00:08:10.141 ************************************ 00:08:10.141 11:56:59 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:10.141 11:56:59 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:10.141 11:56:59 nvmf_tcp -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:08:10.141 11:56:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:10.141 ************************************ 00:08:10.141 START TEST nvmf_target_discovery 00:08:10.141 ************************************ 00:08:10.141 11:56:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:10.141 * Looking for test storage... 00:08:10.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:10.141 11:56:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:10.141 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:10.141 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:10.141 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:10.141 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:10.141 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:10.141 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:10.141 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:10.141 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:10.141 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:10.141 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:10.141 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:10.141 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:10.141 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:10.141 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:10.141 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:10.141 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:10.141 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:10.141 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:10.142 11:56:59 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.142 11:56:59 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.142 11:56:59 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.142 11:56:59 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.142 11:56:59 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.142 11:56:59 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.142 11:56:59 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:10.142 11:56:59 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.142 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:10.142 11:56:59 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:10.142 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:10.142 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:10.142 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:10.142 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:10.142 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:10.142 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:10.142 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:10.142 11:56:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:10.142 11:56:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:10.142 11:56:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:10.142 11:56:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:10.142 11:56:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:10.142 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:10.142 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:10.142 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:10.142 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:10.142 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:10.142 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.142 11:56:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:10.142 11:56:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.142 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:10.142 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:10.142 11:56:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:10.142 11:56:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:16.709 11:57:05 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:16.709 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:16.709 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:16.709 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:16.710 Found net devices under 0000:af:00.0: cvl_0_0 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:16.710 Found net devices under 0000:af:00.1: cvl_0_1 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.710 11:57:05 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:16.710 11:57:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:08:16.710 11:57:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:16.710 11:57:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:16.710 11:57:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:16.710 11:57:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:16.710 11:57:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:16.710 11:57:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:16.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:16.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:08:16.710 00:08:16.710 --- 10.0.0.2 ping statistics --- 00:08:16.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.710 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:08:16.710 11:57:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:16.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:16.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:08:16.710 00:08:16.710 --- 10.0.0.1 ping statistics --- 00:08:16.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.710 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:08:16.710 11:57:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:16.710 11:57:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:16.710 11:57:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:16.710 11:57:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:16.710 11:57:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:16.710 11:57:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:16.710 11:57:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:16.710 11:57:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:16.710 11:57:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:16.710 11:57:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:16.710 11:57:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:16.710 11:57:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:16.710 11:57:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.710 11:57:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2065978 00:08:16.710 11:57:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:16.710 11:57:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # 
waitforlisten 2065978 00:08:16.710 11:57:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@830 -- # '[' -z 2065978 ']' 00:08:16.710 11:57:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.710 11:57:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:16.710 11:57:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.710 11:57:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:16.710 11:57:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.971 [2024-06-10 11:57:06.248040] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:08:16.971 [2024-06-10 11:57:06.248090] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.971 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.971 [2024-06-10 11:57:06.322351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:16.971 [2024-06-10 11:57:06.392963] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.971 [2024-06-10 11:57:06.393005] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.971 [2024-06-10 11:57:06.393015] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.971 [2024-06-10 11:57:06.393023] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:16.971 [2024-06-10 11:57:06.393046] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:16.971 [2024-06-10 11:57:06.393101] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.971 [2024-06-10 11:57:06.393193] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:16.971 [2024-06-10 11:57:06.393279] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:16.971 [2024-06-10 11:57:06.393281] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.908 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:17.908 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@863 -- # return 0 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.909 [2024-06-10 11:57:07.108427] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:17.909 11:57:07 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.909 Null1 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.909 [2024-06-10 11:57:07.160754] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:17.909 11:57:07 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.909 Null2 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.909 Null3 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i 
in $(seq 1 4) 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.909 Null4 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener 
discovery -t tcp -a 10.0.0.2 -s 4420 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:17.909 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 4420 00:08:17.909 00:08:17.909 Discovery Log Number of Records 6, Generation counter 6 00:08:17.909 =====Discovery Log Entry 0====== 00:08:17.909 trtype: tcp 00:08:17.909 adrfam: ipv4 00:08:17.909 subtype: current discovery subsystem 00:08:17.909 treq: not required 00:08:17.909 portid: 0 00:08:17.909 trsvcid: 4420 00:08:17.909 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:17.909 traddr: 10.0.0.2 00:08:17.909 eflags: explicit discovery connections, duplicate discovery information 00:08:17.909 sectype: none 00:08:17.909 =====Discovery Log Entry 1====== 00:08:17.909 trtype: tcp 00:08:17.909 adrfam: ipv4 00:08:17.909 subtype: nvme subsystem 00:08:17.909 treq: not required 00:08:17.909 portid: 0 00:08:17.909 trsvcid: 4420 00:08:17.909 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:17.909 traddr: 10.0.0.2 00:08:17.909 eflags: none 00:08:17.909 sectype: none 00:08:17.909 =====Discovery Log Entry 2====== 00:08:17.909 trtype: tcp 00:08:17.909 adrfam: 
ipv4 00:08:17.909 subtype: nvme subsystem 00:08:17.909 treq: not required 00:08:17.909 portid: 0 00:08:17.909 trsvcid: 4420 00:08:17.909 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:17.909 traddr: 10.0.0.2 00:08:17.909 eflags: none 00:08:17.909 sectype: none 00:08:17.909 =====Discovery Log Entry 3====== 00:08:17.909 trtype: tcp 00:08:17.909 adrfam: ipv4 00:08:17.909 subtype: nvme subsystem 00:08:17.909 treq: not required 00:08:17.909 portid: 0 00:08:17.909 trsvcid: 4420 00:08:17.909 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:17.910 traddr: 10.0.0.2 00:08:17.910 eflags: none 00:08:17.910 sectype: none 00:08:17.910 =====Discovery Log Entry 4====== 00:08:17.910 trtype: tcp 00:08:17.910 adrfam: ipv4 00:08:17.910 subtype: nvme subsystem 00:08:17.910 treq: not required 00:08:17.910 portid: 0 00:08:17.910 trsvcid: 4420 00:08:17.910 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:17.910 traddr: 10.0.0.2 00:08:17.910 eflags: none 00:08:17.910 sectype: none 00:08:17.910 =====Discovery Log Entry 5====== 00:08:17.910 trtype: tcp 00:08:17.910 adrfam: ipv4 00:08:17.910 subtype: discovery subsystem referral 00:08:17.910 treq: not required 00:08:17.910 portid: 0 00:08:17.910 trsvcid: 4430 00:08:17.910 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:17.910 traddr: 10.0.0.2 00:08:17.910 eflags: none 00:08:17.910 sectype: none 00:08:17.910 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:17.910 Perform nvmf subsystem discovery via RPC 00:08:17.910 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:17.910 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:17.910 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.910 [ 00:08:17.910 { 00:08:17.910 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:17.910 "subtype": "Discovery", 00:08:17.910 "listen_addresses": [ 00:08:17.910 { 
00:08:17.910 "trtype": "TCP", 00:08:17.910 "adrfam": "IPv4", 00:08:17.910 "traddr": "10.0.0.2", 00:08:17.910 "trsvcid": "4420" 00:08:17.910 } 00:08:17.910 ], 00:08:17.910 "allow_any_host": true, 00:08:17.910 "hosts": [] 00:08:17.910 }, 00:08:17.910 { 00:08:17.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:17.910 "subtype": "NVMe", 00:08:17.910 "listen_addresses": [ 00:08:17.910 { 00:08:17.910 "trtype": "TCP", 00:08:17.910 "adrfam": "IPv4", 00:08:17.910 "traddr": "10.0.0.2", 00:08:17.910 "trsvcid": "4420" 00:08:17.910 } 00:08:17.910 ], 00:08:17.910 "allow_any_host": true, 00:08:17.910 "hosts": [], 00:08:17.910 "serial_number": "SPDK00000000000001", 00:08:17.910 "model_number": "SPDK bdev Controller", 00:08:17.910 "max_namespaces": 32, 00:08:17.910 "min_cntlid": 1, 00:08:17.910 "max_cntlid": 65519, 00:08:17.910 "namespaces": [ 00:08:17.910 { 00:08:17.910 "nsid": 1, 00:08:17.910 "bdev_name": "Null1", 00:08:17.910 "name": "Null1", 00:08:17.910 "nguid": "C23FFB79FDD7494A95385C368DBD4FD4", 00:08:17.910 "uuid": "c23ffb79-fdd7-494a-9538-5c368dbd4fd4" 00:08:17.910 } 00:08:17.910 ] 00:08:17.910 }, 00:08:17.910 { 00:08:17.910 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:17.910 "subtype": "NVMe", 00:08:17.910 "listen_addresses": [ 00:08:17.910 { 00:08:17.910 "trtype": "TCP", 00:08:17.910 "adrfam": "IPv4", 00:08:17.910 "traddr": "10.0.0.2", 00:08:17.910 "trsvcid": "4420" 00:08:17.910 } 00:08:17.910 ], 00:08:17.910 "allow_any_host": true, 00:08:17.910 "hosts": [], 00:08:17.910 "serial_number": "SPDK00000000000002", 00:08:17.910 "model_number": "SPDK bdev Controller", 00:08:17.910 "max_namespaces": 32, 00:08:17.910 "min_cntlid": 1, 00:08:17.910 "max_cntlid": 65519, 00:08:17.910 "namespaces": [ 00:08:17.910 { 00:08:17.910 "nsid": 1, 00:08:17.910 "bdev_name": "Null2", 00:08:17.910 "name": "Null2", 00:08:17.910 "nguid": "AA823A769F154B47B2FB8C05AC951FB1", 00:08:17.910 "uuid": "aa823a76-9f15-4b47-b2fb-8c05ac951fb1" 00:08:17.910 } 00:08:17.910 ] 00:08:17.910 }, 00:08:17.910 { 
00:08:17.910 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:17.910 "subtype": "NVMe", 00:08:17.910 "listen_addresses": [ 00:08:17.910 { 00:08:17.910 "trtype": "TCP", 00:08:17.910 "adrfam": "IPv4", 00:08:17.910 "traddr": "10.0.0.2", 00:08:17.910 "trsvcid": "4420" 00:08:17.910 } 00:08:17.910 ], 00:08:17.910 "allow_any_host": true, 00:08:17.910 "hosts": [], 00:08:17.910 "serial_number": "SPDK00000000000003", 00:08:17.910 "model_number": "SPDK bdev Controller", 00:08:17.910 "max_namespaces": 32, 00:08:17.910 "min_cntlid": 1, 00:08:17.910 "max_cntlid": 65519, 00:08:17.910 "namespaces": [ 00:08:17.910 { 00:08:17.910 "nsid": 1, 00:08:17.910 "bdev_name": "Null3", 00:08:17.910 "name": "Null3", 00:08:17.910 "nguid": "101C592F95484B35BA962BD39F463BA1", 00:08:17.910 "uuid": "101c592f-9548-4b35-ba96-2bd39f463ba1" 00:08:17.910 } 00:08:17.910 ] 00:08:17.910 }, 00:08:17.910 { 00:08:17.910 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:17.910 "subtype": "NVMe", 00:08:17.910 "listen_addresses": [ 00:08:17.910 { 00:08:17.910 "trtype": "TCP", 00:08:17.910 "adrfam": "IPv4", 00:08:17.910 "traddr": "10.0.0.2", 00:08:17.910 "trsvcid": "4420" 00:08:17.910 } 00:08:17.910 ], 00:08:17.910 "allow_any_host": true, 00:08:17.910 "hosts": [], 00:08:17.910 "serial_number": "SPDK00000000000004", 00:08:17.910 "model_number": "SPDK bdev Controller", 00:08:17.910 "max_namespaces": 32, 00:08:17.910 "min_cntlid": 1, 00:08:17.910 "max_cntlid": 65519, 00:08:17.910 "namespaces": [ 00:08:17.910 { 00:08:17.910 "nsid": 1, 00:08:17.910 "bdev_name": "Null4", 00:08:17.910 "name": "Null4", 00:08:17.910 "nguid": "FA461B4303D44BC2BBED6D29BBC03E30", 00:08:17.910 "uuid": "fa461b43-03d4-4bc2-bbed-6d29bbc03e30" 00:08:17.910 } 00:08:17.910 ] 00:08:17.910 } 00:08:17.910 ] 00:08:17.910 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:17.910 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:17.910 11:57:07 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:17.910 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:17.910 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:17.910 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.910 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:17.910 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:17.910 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:17.910 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.910 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:17.910 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:17.910 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:17.910 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:17.910 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.910 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:17.910 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:17.910 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:17.910 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 
00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 
00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:18.170 rmmod nvme_tcp 00:08:18.170 rmmod nvme_fabrics 00:08:18.170 rmmod nvme_keyring 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2065978 ']' 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2065978 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@949 -- # '[' -z 2065978 ']' 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # kill -0 2065978 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # uname 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2065978 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2065978' 00:08:18.170 killing process with pid 2065978 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@968 -- # kill 2065978 00:08:18.170 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@973 -- # wait 2065978 00:08:18.430 11:57:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:18.430 11:57:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:18.430 11:57:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:18.430 11:57:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:18.430 11:57:07 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:18.430 11:57:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.430 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:18.430 11:57:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.966 11:57:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:20.966 00:08:20.966 real 0m10.666s 00:08:20.966 user 0m7.827s 00:08:20.966 sys 0m5.598s 00:08:20.966 11:57:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:20.966 11:57:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:20.966 ************************************ 00:08:20.966 END TEST nvmf_target_discovery 00:08:20.966 ************************************ 00:08:20.966 11:57:09 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:20.966 11:57:09 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:20.966 11:57:09 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:20.966 11:57:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:20.966 ************************************ 00:08:20.966 START TEST nvmf_referrals 00:08:20.966 ************************************ 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:20.966 * Looking for test storage... 
00:08:20.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:20.966 
11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:20.966 11:57:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:27.607 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:27.607 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:27.607 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:27.608 Found net devices under 0000:af:00.0: cvl_0_0 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.608 11:57:16 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:27.608 Found net devices under 0000:af:00.1: cvl_0_1 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:27.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:27.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:08:27.608 00:08:27.608 --- 10.0.0.2 ping statistics --- 00:08:27.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.608 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:27.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:27.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:08:27.608 00:08:27.608 --- 10.0.0.1 ping statistics --- 00:08:27.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.608 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.608 11:57:16 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2069875 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2069875 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@830 -- # '[' -z 2069875 ']' 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:27.608 11:57:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.608 [2024-06-10 11:57:17.027975] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:08:27.608 [2024-06-10 11:57:17.028027] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.608 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.608 [2024-06-10 11:57:17.103608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:27.868 [2024-06-10 11:57:17.180184] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.868 [2024-06-10 11:57:17.180222] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:27.868 [2024-06-10 11:57:17.180232] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:27.868 [2024-06-10 11:57:17.180244] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:27.868 [2024-06-10 11:57:17.180251] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:27.868 [2024-06-10 11:57:17.180296] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.868 [2024-06-10 11:57:17.180381] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:27.868 [2024-06-10 11:57:17.180464] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:27.868 [2024-06-10 11:57:17.180465] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.435 11:57:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:28.435 11:57:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@863 -- # return 0 00:08:28.435 11:57:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:28.435 11:57:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:28.435 11:57:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.435 11:57:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.435 11:57:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:28.435 11:57:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:28.435 11:57:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.435 [2024-06-10 11:57:17.879327] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.435 11:57:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:28.435 11:57:17 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:28.435 11:57:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:28.435 11:57:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.435 [2024-06-10 11:57:17.895521] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:28.435 11:57:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:28.435 11:57:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:28.435 11:57:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:28.435 11:57:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.435 11:57:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:28.435 11:57:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:28.435 11:57:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:28.435 11:57:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.435 11:57:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:28.435 11:57:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:28.435 11:57:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:28.435 11:57:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.435 11:57:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:28.435 11:57:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:28.435 11:57:17 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@48 -- # jq length 00:08:28.435 11:57:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:28.435 11:57:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.435 11:57:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:28.693 11:57:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:28.693 11:57:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:28.693 11:57:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:28.693 11:57:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:28.693 11:57:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:28.693 11:57:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:28.693 11:57:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.693 11:57:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:28.693 11:57:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:28.693 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:28.693 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:28.693 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:28.693 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:28.693 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:28.693 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:08:28.693 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:28.693 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:28.952 11:57:18 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n 
nqn.2016-06.io.spdk:cnode1 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.952 11:57:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:29.211 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:29.211 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:29.211 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:29.211 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:29.211 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:29.211 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:29.211 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | 
select(.subtype != "current discovery subsystem").traddr' 00:08:29.211 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:29.211 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:29.211 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:29.211 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:29.211 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:29.211 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:29.211 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:29.211 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:29.470 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:29.470 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:29.470 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:29.470 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:29.470 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:29.470 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
00:08:29.470 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:29.470 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:29.470 11:57:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:29.470 11:57:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:29.470 11:57:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:29.470 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:29.470 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:29.729 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:29.729 11:57:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:29.729 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:29.729 11:57:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:29.729 11:57:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:29.729 11:57:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:29.729 11:57:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:29.729 11:57:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:29.729 11:57:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:29.729 11:57:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:29.729 11:57:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:29.729 11:57:19 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:29.729 11:57:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:29.729 11:57:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:29.729 11:57:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:29.729 11:57:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:29.729 11:57:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:29.729 11:57:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:29.729 11:57:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:29.729 11:57:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:29.729 11:57:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:29.991 11:57:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:29.991 11:57:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:29.991 11:57:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:29.991 11:57:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:29.991 11:57:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:08:29.991 11:57:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:30.250 11:57:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:30.250 11:57:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:30.250 11:57:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:30.250 11:57:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:30.250 11:57:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:30.250 11:57:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:30.250 11:57:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:30.250 11:57:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:30.250 11:57:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:30.250 11:57:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:30.250 11:57:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:30.250 11:57:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:30.250 11:57:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:30.250 11:57:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:30.250 11:57:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:30.250 11:57:19 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:30.250 11:57:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:30.250 11:57:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:30.250 11:57:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:30.250 11:57:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:30.250 11:57:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:30.250 11:57:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:30.250 11:57:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:30.250 11:57:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:30.250 11:57:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:30.250 11:57:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:30.250 11:57:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:30.250 rmmod nvme_tcp 00:08:30.250 rmmod nvme_fabrics 00:08:30.509 rmmod nvme_keyring 00:08:30.509 11:57:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:30.509 11:57:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:30.509 11:57:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:30.509 11:57:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2069875 ']' 00:08:30.509 11:57:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2069875 00:08:30.509 11:57:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@949 -- # '[' -z 2069875 ']' 00:08:30.509 11:57:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # kill -0 2069875 00:08:30.509 11:57:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # uname 00:08:30.509 11:57:19 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:30.509 11:57:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2069875 00:08:30.509 11:57:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:30.509 11:57:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:30.509 11:57:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2069875' 00:08:30.509 killing process with pid 2069875 00:08:30.509 11:57:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@968 -- # kill 2069875 00:08:30.509 11:57:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@973 -- # wait 2069875 00:08:30.768 11:57:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:30.768 11:57:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:30.768 11:57:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:30.768 11:57:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:30.768 11:57:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:30.768 11:57:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.768 11:57:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:30.768 11:57:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.673 11:57:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:32.673 00:08:32.673 real 0m12.111s 00:08:32.673 user 0m14.086s 00:08:32.673 sys 0m6.005s 00:08:32.673 11:57:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:32.673 11:57:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.673 ************************************ 
00:08:32.673 END TEST nvmf_referrals 00:08:32.673 ************************************ 00:08:32.673 11:57:22 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:32.673 11:57:22 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:32.673 11:57:22 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:32.673 11:57:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:32.932 ************************************ 00:08:32.932 START TEST nvmf_connect_disconnect 00:08:32.932 ************************************ 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:32.932 * Looking for test storage... 00:08:32.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # 
NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:32.932 11:57:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 
00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:39.495 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:39.495 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.495 11:57:28 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:39.495 Found net devices under 0000:af:00.0: cvl_0_0 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:39.495 Found net devices under 0000:af:00.1: cvl_0_1 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:39.495 11:57:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:39.754 11:57:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:39.754 11:57:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:39.754 11:57:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:39.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:39.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:08:39.754 00:08:39.754 --- 10.0.0.2 ping statistics --- 00:08:39.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.754 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:08:39.754 11:57:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:39.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:39.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:08:39.754 00:08:39.754 --- 10.0.0.1 ping statistics --- 00:08:39.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.754 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:08:39.754 11:57:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:39.754 11:57:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:39.754 11:57:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:39.754 11:57:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:39.754 11:57:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:39.754 11:57:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:39.754 11:57:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:39.754 11:57:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:39.754 11:57:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:39.754 11:57:29 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:39.754 11:57:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:39.754 11:57:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@723 -- # 
xtrace_disable 00:08:39.754 11:57:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:39.754 11:57:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2074153 00:08:39.754 11:57:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2074153 00:08:39.754 11:57:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:39.754 11:57:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@830 -- # '[' -z 2074153 ']' 00:08:39.754 11:57:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.754 11:57:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:39.754 11:57:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.754 11:57:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:39.754 11:57:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:39.754 [2024-06-10 11:57:29.195326] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:08:39.754 [2024-06-10 11:57:29.195374] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.754 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.754 [2024-06-10 11:57:29.269024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:40.013 [2024-06-10 11:57:29.343922] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:40.013 [2024-06-10 11:57:29.343961] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:40.013 [2024-06-10 11:57:29.343970] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:40.013 [2024-06-10 11:57:29.343979] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:40.013 [2024-06-10 11:57:29.343986] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:40.013 [2024-06-10 11:57:29.344037] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.013 [2024-06-10 11:57:29.344134] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.013 [2024-06-10 11:57:29.344217] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.013 [2024-06-10 11:57:29.344218] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.578 11:57:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:40.578 11:57:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@863 -- # return 0 00:08:40.578 11:57:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:40.578 11:57:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:40.578 11:57:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:40.578 11:57:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.578 11:57:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:40.578 11:57:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:40.578 11:57:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:40.578 [2024-06-10 11:57:30.068488] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.578 11:57:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:40.578 11:57:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:40.578 11:57:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:40.578 11:57:30 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:08:40.836 11:57:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:40.836 11:57:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:40.836 11:57:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:40.836 11:57:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:40.836 11:57:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:40.836 11:57:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:40.836 11:57:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:40.836 11:57:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:40.836 11:57:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:40.836 11:57:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:40.836 11:57:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:40.836 11:57:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:40.836 11:57:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:40.836 [2024-06-10 11:57:30.123461] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:40.836 11:57:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:40.836 11:57:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:40.836 11:57:30 
nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:40.836 11:57:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:44.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.887 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.168 11:57:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:58.168 11:57:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:58.168 11:57:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:58.168 11:57:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:58.168 11:57:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:58.168 11:57:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:58.168 11:57:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:58.168 11:57:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:58.168 rmmod nvme_tcp 00:08:58.168 rmmod nvme_fabrics 00:08:58.168 rmmod nvme_keyring 00:08:58.168 11:57:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:58.168 11:57:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:58.168 11:57:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:58.168 11:57:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2074153 ']' 00:08:58.168 11:57:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2074153 00:08:58.168 11:57:47 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@949 -- # '[' -z 2074153 ']' 00:08:58.168 11:57:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # kill -0 2074153 00:08:58.168 11:57:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # uname 00:08:58.168 11:57:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:58.168 11:57:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2074153 00:08:58.168 11:57:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:58.168 11:57:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:58.168 11:57:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2074153' 00:08:58.168 killing process with pid 2074153 00:08:58.168 11:57:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # kill 2074153 00:08:58.168 11:57:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # wait 2074153 00:08:58.168 11:57:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:58.168 11:57:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:58.168 11:57:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:58.168 11:57:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:58.168 11:57:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:58.168 11:57:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.168 11:57:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:58.168 11:57:47 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.755 11:57:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:00.755 00:09:00.755 real 0m27.505s 00:09:00.755 user 1m14.073s 00:09:00.755 sys 0m7.084s 00:09:00.755 11:57:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:00.755 11:57:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:00.755 ************************************ 00:09:00.755 END TEST nvmf_connect_disconnect 00:09:00.755 ************************************ 00:09:00.755 11:57:49 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:00.755 11:57:49 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:00.756 11:57:49 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:00.756 11:57:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:00.756 ************************************ 00:09:00.756 START TEST nvmf_multitarget 00:09:00.756 ************************************ 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:00.756 * Looking for test storage... 
00:09:00.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:09:00.756 11:57:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:07.316 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:07.316 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:09:07.316 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:07.316 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:07.316 11:57:56 nvmf_tcp.nvmf_multitarget 
-- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:07.316 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:07.316 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:07.316 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:09:07.316 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:07.316 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:09:07.316 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:09:07.316 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:09:07.316 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:09:07.316 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:09:07.316 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:09:07.316 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:07.316 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:07.316 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:07.316 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:07.316 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:07.316 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:07.316 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:07.316 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:07.316 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:09:07.316 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:07.316 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:07.316 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:07.317 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:07.317 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:07.317 Found net devices under 0000:af:00.0: cvl_0_0 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:07.317 11:57:56 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:07.317 Found net devices under 0000:af:00.1: cvl_0_1 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:07.317 11:57:56 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:07.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:07.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:09:07.317 00:09:07.317 --- 10.0.0.2 ping statistics --- 00:09:07.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.317 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:07.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:07.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:09:07.317 00:09:07.317 --- 10.0.0.1 ping statistics --- 00:09:07.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.317 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:07.317 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:07.318 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:07.318 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:07.318 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:07.318 11:57:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:09:07.318 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:07.318 11:57:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:07.318 11:57:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:07.576 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2081153 00:09:07.576 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:07.576 11:57:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2081153 00:09:07.576 11:57:56 nvmf_tcp.nvmf_multitarget -- 
common/autotest_common.sh@830 -- # '[' -z 2081153 ']' 00:09:07.576 11:57:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.576 11:57:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:07.576 11:57:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.576 11:57:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:07.576 11:57:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:07.576 [2024-06-10 11:57:56.888664] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:09:07.576 [2024-06-10 11:57:56.888714] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.576 EAL: No free 2048 kB hugepages reported on node 1 00:09:07.576 [2024-06-10 11:57:56.963712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:07.576 [2024-06-10 11:57:57.038300] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:07.576 [2024-06-10 11:57:57.038336] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:07.576 [2024-06-10 11:57:57.038348] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:07.576 [2024-06-10 11:57:57.038357] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:07.576 [2024-06-10 11:57:57.038364] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:07.576 [2024-06-10 11:57:57.038426] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.576 [2024-06-10 11:57:57.038512] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:07.576 [2024-06-10 11:57:57.038559] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:09:07.576 [2024-06-10 11:57:57.038561] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.509 11:57:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:08.509 11:57:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@863 -- # return 0 00:09:08.509 11:57:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:08.509 11:57:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:08.509 11:57:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:08.509 11:57:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:08.509 11:57:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:08.509 11:57:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:08.509 11:57:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:09:08.509 11:57:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:09:08.509 11:57:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:09:08.509 "nvmf_tgt_1" 00:09:08.509 11:57:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:09:08.767 "nvmf_tgt_2" 00:09:08.767 11:57:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:08.767 11:57:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:09:08.767 11:57:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:09:08.767 11:57:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:09:08.767 true 00:09:08.767 11:57:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:09:09.025 true 00:09:09.025 11:57:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:09.025 11:57:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:09:09.025 11:57:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:09:09.025 11:57:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:09.025 11:57:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:09:09.025 11:57:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:09.025 11:57:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:09:09.025 11:57:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:09.025 11:57:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:09:09.025 11:57:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:09.025 11:57:58 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:09.025 rmmod nvme_tcp 00:09:09.025 rmmod nvme_fabrics 00:09:09.025 rmmod nvme_keyring 00:09:09.025 11:57:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:09.025 11:57:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:09:09.025 11:57:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:09:09.025 11:57:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2081153 ']' 00:09:09.025 11:57:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2081153 00:09:09.025 11:57:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@949 -- # '[' -z 2081153 ']' 00:09:09.025 11:57:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # kill -0 2081153 00:09:09.025 11:57:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # uname 00:09:09.283 11:57:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:09.283 11:57:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2081153 00:09:09.283 11:57:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:09.283 11:57:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:09.283 11:57:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2081153' 00:09:09.283 killing process with pid 2081153 00:09:09.283 11:57:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@968 -- # kill 2081153 00:09:09.283 11:57:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@973 -- # wait 2081153 00:09:09.283 11:57:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:09.283 11:57:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:09.283 11:57:58 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:09.283 11:57:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:09.283 11:57:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:09.283 11:57:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.283 11:57:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:09.283 11:57:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.813 11:58:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:11.813 00:09:11.813 real 0m11.059s 00:09:11.813 user 0m9.481s 00:09:11.813 sys 0m5.852s 00:09:11.813 11:58:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:11.813 11:58:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:11.813 ************************************ 00:09:11.813 END TEST nvmf_multitarget 00:09:11.813 ************************************ 00:09:11.813 11:58:00 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:11.813 11:58:00 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:11.813 11:58:00 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:11.813 11:58:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:11.813 ************************************ 00:09:11.813 START TEST nvmf_rpc 00:09:11.813 ************************************ 00:09:11.813 11:58:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:11.813 * Looking for test storage... 
00:09:11.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:11.813 11:58:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:11.813 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:09:11.813 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:11.813 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.813 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.813 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.813 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:09:11.814 11:58:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:09:18.383 11:58:07 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 
== mlx5 ]] 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:18.383 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:18.383 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:18.383 Found net devices under 0000:af:00.0: cvl_0_0 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:18.383 Found net devices under 0000:af:00.1: cvl_0_1 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 
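The trace above (nvmf/common.sh@382-401) resolves each matched PCI address to its kernel net device by globbing sysfs and stripping the path. A minimal sketch of that lookup, with the sysfs root as a parameter so it can be exercised against a fake tree (`find_net_devs` is a hypothetical name, not the helper's real one):

```shell
#!/bin/sh
# Hypothetical helper mirroring the pci_net_devs glob in nvmf/common.sh@383-401.
# The sysfs root is passed in so the lookup can be tested on a fabricated tree.
find_net_devs() {
    sysfs_root=$1
    pci=$2
    for path in "$sysfs_root/devices/$pci/net/"*; do
        [ -e "$path" ] || continue      # glob matched nothing for this device
        dev=${path##*/}                 # strip the directory, keep the iface name
        # the real script additionally filters on link state ("up == up" above)
        state=$(cat "$path/operstate" 2>/dev/null || echo up)
        [ "$state" = "up" ] && echo "$dev"
    done
}
```

In the log this is what turns `0000:af:00.0` and `0000:af:00.1` into `cvl_0_0` and `cvl_0_1` before they are appended to `net_devs`.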
00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:18.383 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:18.384 11:58:07 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:18.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:18.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:09:18.384 00:09:18.384 --- 10.0.0.2 ping statistics --- 00:09:18.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.384 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:18.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:18.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:09:18.384 00:09:18.384 --- 10.0.0.1 ping statistics --- 00:09:18.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.384 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:18.384 
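The `nvmf_tcp_init` steps traced above (nvmf/common.sh@242-268) move the target NIC into a private network namespace so initiator and target can exchange real TCP traffic on one host, then verify both directions with `ping`. A condensed sketch of that sequence; `$IP`/`$IPT` are indirection variables added here so the sequence can be dry-run (the real script calls `ip` and `iptables` directly, and flushes addresses first):

```shell
#!/bin/sh
# Sketch of the netns split from nvmf/common.sh@242-268. Set IP=echo IPT=echo
# to dry-run the command sequence without root or real NICs.
: "${IP:=ip}"
: "${IPT:=iptables}"
setup_target_ns() {
    ns=$1 target_if=$2 initiator_if=$3
    $IP netns add "$ns"
    $IP link set "$target_if" netns "$ns"                       # target NIC into ns
    $IP addr add 10.0.0.1/24 dev "$initiator_if"                # initiator side
    $IP netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    $IP link set "$initiator_if" up
    $IP netns exec "$ns" ip link set "$target_if" up
    $IP netns exec "$ns" ip link set lo up
    # admit NVMe/TCP traffic on the default port used throughout this run
    $IPT -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
}
```

After this, `10.0.0.2` is the target address every `nvme connect` in the rest of the log points at.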
11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2085153 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2085153 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@830 -- # '[' -z 2085153 ']' 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:18.384 11:58:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.384 [2024-06-10 11:58:07.871876] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:09:18.384 [2024-06-10 11:58:07.871923] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:18.642 EAL: No free 2048 kB hugepages reported on node 1 00:09:18.642 [2024-06-10 11:58:07.946185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:18.642 [2024-06-10 11:58:08.020096] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:18.643 [2024-06-10 11:58:08.020136] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:18.643 [2024-06-10 11:58:08.020149] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:18.643 [2024-06-10 11:58:08.020173] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:18.643 [2024-06-10 11:58:08.020180] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:18.643 [2024-06-10 11:58:08.020225] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:18.643 [2024-06-10 11:58:08.020317] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:18.643 [2024-06-10 11:58:08.020405] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:09:18.643 [2024-06-10 11:58:08.020406] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.211 11:58:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:19.211 11:58:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@863 -- # return 0 00:09:19.211 11:58:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:19.211 11:58:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:19.211 11:58:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.211 11:58:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:19.211 11:58:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:19.211 11:58:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.211 11:58:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.470 11:58:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.470 11:58:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:09:19.470 "tick_rate": 2500000000, 00:09:19.470 "poll_groups": [ 00:09:19.470 { 00:09:19.470 "name": "nvmf_tgt_poll_group_000", 00:09:19.470 "admin_qpairs": 0, 00:09:19.470 "io_qpairs": 0, 00:09:19.470 "current_admin_qpairs": 0, 00:09:19.470 "current_io_qpairs": 0, 00:09:19.470 "pending_bdev_io": 0, 00:09:19.470 "completed_nvme_io": 0, 00:09:19.470 "transports": [] 00:09:19.470 }, 00:09:19.470 { 00:09:19.470 "name": "nvmf_tgt_poll_group_001", 00:09:19.470 "admin_qpairs": 0, 00:09:19.470 "io_qpairs": 0, 00:09:19.470 "current_admin_qpairs": 
0, 00:09:19.470 "current_io_qpairs": 0, 00:09:19.470 "pending_bdev_io": 0, 00:09:19.470 "completed_nvme_io": 0, 00:09:19.470 "transports": [] 00:09:19.470 }, 00:09:19.470 { 00:09:19.470 "name": "nvmf_tgt_poll_group_002", 00:09:19.470 "admin_qpairs": 0, 00:09:19.470 "io_qpairs": 0, 00:09:19.470 "current_admin_qpairs": 0, 00:09:19.470 "current_io_qpairs": 0, 00:09:19.470 "pending_bdev_io": 0, 00:09:19.470 "completed_nvme_io": 0, 00:09:19.470 "transports": [] 00:09:19.470 }, 00:09:19.470 { 00:09:19.470 "name": "nvmf_tgt_poll_group_003", 00:09:19.470 "admin_qpairs": 0, 00:09:19.470 "io_qpairs": 0, 00:09:19.470 "current_admin_qpairs": 0, 00:09:19.470 "current_io_qpairs": 0, 00:09:19.470 "pending_bdev_io": 0, 00:09:19.470 "completed_nvme_io": 0, 00:09:19.470 "transports": [] 00:09:19.470 } 00:09:19.470 ] 00:09:19.470 }' 00:09:19.470 11:58:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:19.470 11:58:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:19.470 11:58:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:19.470 11:58:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:09:19.470 11:58:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:19.470 11:58:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:19.470 11:58:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:19.470 11:58:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:19.470 11:58:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.470 11:58:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.470 [2024-06-10 11:58:08.837665] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:19.470 11:58:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.470 11:58:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # 
rpc_cmd nvmf_get_stats 00:09:19.470 11:58:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.470 11:58:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.470 11:58:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.470 11:58:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:09:19.470 "tick_rate": 2500000000, 00:09:19.470 "poll_groups": [ 00:09:19.470 { 00:09:19.470 "name": "nvmf_tgt_poll_group_000", 00:09:19.470 "admin_qpairs": 0, 00:09:19.470 "io_qpairs": 0, 00:09:19.470 "current_admin_qpairs": 0, 00:09:19.470 "current_io_qpairs": 0, 00:09:19.470 "pending_bdev_io": 0, 00:09:19.470 "completed_nvme_io": 0, 00:09:19.470 "transports": [ 00:09:19.470 { 00:09:19.470 "trtype": "TCP" 00:09:19.470 } 00:09:19.470 ] 00:09:19.470 }, 00:09:19.470 { 00:09:19.470 "name": "nvmf_tgt_poll_group_001", 00:09:19.470 "admin_qpairs": 0, 00:09:19.470 "io_qpairs": 0, 00:09:19.470 "current_admin_qpairs": 0, 00:09:19.470 "current_io_qpairs": 0, 00:09:19.470 "pending_bdev_io": 0, 00:09:19.470 "completed_nvme_io": 0, 00:09:19.470 "transports": [ 00:09:19.470 { 00:09:19.470 "trtype": "TCP" 00:09:19.470 } 00:09:19.470 ] 00:09:19.470 }, 00:09:19.470 { 00:09:19.470 "name": "nvmf_tgt_poll_group_002", 00:09:19.470 "admin_qpairs": 0, 00:09:19.470 "io_qpairs": 0, 00:09:19.470 "current_admin_qpairs": 0, 00:09:19.470 "current_io_qpairs": 0, 00:09:19.470 "pending_bdev_io": 0, 00:09:19.470 "completed_nvme_io": 0, 00:09:19.470 "transports": [ 00:09:19.470 { 00:09:19.470 "trtype": "TCP" 00:09:19.470 } 00:09:19.470 ] 00:09:19.470 }, 00:09:19.470 { 00:09:19.470 "name": "nvmf_tgt_poll_group_003", 00:09:19.470 "admin_qpairs": 0, 00:09:19.470 "io_qpairs": 0, 00:09:19.470 "current_admin_qpairs": 0, 00:09:19.470 "current_io_qpairs": 0, 00:09:19.470 "pending_bdev_io": 0, 00:09:19.470 "completed_nvme_io": 0, 00:09:19.470 "transports": [ 00:09:19.470 { 00:09:19.470 "trtype": "TCP" 00:09:19.470 } 00:09:19.470 ] 00:09:19.470 } 
00:09:19.470 ] 00:09:19.470 }' 00:09:19.470 11:58:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:19.471 11:58:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:19.471 11:58:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:19.471 11:58:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:19.471 11:58:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:19.471 11:58:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:19.471 11:58:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:19.471 11:58:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:19.471 11:58:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:19.471 11:58:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:19.471 11:58:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:09:19.471 11:58:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:19.471 11:58:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:19.471 11:58:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:19.471 11:58:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.471 11:58:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.471 Malloc1 00:09:19.471 11:58:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.471 11:58:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:19.471 11:58:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.471 11:58:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.471 11:58:08 nvmf_tcp.nvmf_rpc -- 
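The `jsum` helper exercised above (target/rpc.sh@19-20) pipes `jq '.poll_groups[].io_qpairs'` into `awk '{s+=$1}END{print s}'` to total a counter across all four poll groups, which is how the log arrives at `(( 0 == 0 ))` before any connections exist. A jq-free approximation of the same idea (`jsum_approx` is a hypothetical stand-in, shown only to make the pipeline concrete):

```shell
#!/bin/sh
# Approximates rpc.sh's jsum: extract every "<field>: N" pair from the
# nvmf_get_stats JSON on stdin and sum the numbers with awk.
jsum_approx() {
    field=$1
    grep -o "\"$field\": [0-9]*" | grep -o '[0-9]*$' | awk '{s+=$1} END {print s+0}'
}

stats='{"poll_groups":[{"io_qpairs": 0},{"io_qpairs": 0},{"io_qpairs": 0},{"io_qpairs": 0}]}'
printf '%s\n' "$stats" | jsum_approx io_qpairs    # prints 0 for an idle target
```

The companion `jcount` check works the same way, counting `jq '.poll_groups[].name'` output lines with `wc -l` to confirm all four poll groups are present.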
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.471 11:58:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:19.471 11:58:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.471 11:58:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.471 11:58:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.471 11:58:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:19.730 11:58:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.730 11:58:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.730 11:58:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.730 11:58:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:19.730 11:58:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.730 11:58:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.730 [2024-06-10 11:58:09.000576] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:19.730 11:58:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.730 11:58:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:09:19.730 11:58:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:09:19.730 11:58:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:09:19.730 11:58:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:09:19.730 11:58:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:19.730 11:58:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:09:19.730 11:58:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:19.730 11:58:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:09:19.730 11:58:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:19.730 11:58:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:09:19.730 11:58:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:09:19.730 11:58:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:09:19.730 [2024-06-10 11:58:09.029126] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e' 00:09:19.730 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:19.730 could not add new controller: failed to write to nvme-fabrics device 00:09:19.730 11:58:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:09:19.730 11:58:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:09:19.730 11:58:09 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@671 -- # [[ -n '' ]] 00:09:19.730 11:58:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:09:19.730 11:58:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:19.730 11:58:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.730 11:58:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.730 11:58:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.730 11:58:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:21.108 11:58:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:21.108 11:58:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:09:21.108 11:58:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:21.108 11:58:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:21.108 11:58:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:23.013 11:58:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:23.013 11:58:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:23.013 11:58:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:23.013 11:58:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:23.013 11:58:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:23.013 11:58:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:09:23.013 11:58:12 nvmf_tcp.nvmf_rpc 
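The `waitforserial` loop traced nearby (common/autotest_common.sh@1197-1207) gives the fabric connect time to surface a block device: it re-runs `lsblk -l -o NAME,SERIAL`, counts lines matching the expected serial, and sleeps between attempts. A sketch under those assumptions; `$LSBLK` is an indirection variable added here so the loop can be exercised with a stub (the real helper shells out to `lsblk` directly and tracks an expected device count):

```shell
#!/bin/sh
# Sketch of waitforserial from common/autotest_common.sh@1197-1207: poll until
# at least `want` devices carrying `serial` appear, or give up after 15 tries.
: "${LSBLK:=lsblk -l -o NAME,SERIAL}"
waitforserial_sketch() {
    serial=$1 want=${2:-1} i=0
    while [ "$i" -le 15 ]; do
        i=$((i + 1))
        have=$($LSBLK | grep -c "$serial")
        [ "$have" -ge "$want" ] && return 0
        sleep 2                      # let the nvme connect settle
    done
    return 1
}
```

This is why the log shows `grep -c SPDKISFASTANDAWESOME` followed by `nvme_devices=1` and an immediate `return 0`: the device appeared on the first poll.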
-- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:23.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.013 11:58:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:23.013 11:58:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:09:23.013 11:58:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:23.013 11:58:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.013 11:58:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.013 11:58:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:23.013 11:58:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:09:23.013 11:58:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:23.013 11:58:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:23.013 11:58:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.013 11:58:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:23.013 11:58:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:23.013 11:58:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:09:23.013 11:58:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:23.013 11:58:12 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:09:23.013 11:58:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:23.013 11:58:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:09:23.013 11:58:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:23.013 11:58:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:09:23.013 11:58:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:23.013 11:58:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:09:23.013 11:58:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:09:23.013 11:58:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:23.013 [2024-06-10 11:58:12.526022] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e' 00:09:23.272 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:23.272 could not add new controller: failed to write to nvme-fabrics device 00:09:23.272 11:58:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:09:23.272 11:58:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:09:23.272 11:58:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:09:23.272 11:58:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:09:23.272 11:58:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:23.272 11:58:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:09:23.272 11:58:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.272 11:58:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:23.272 11:58:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:24.648 11:58:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:24.648 11:58:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:09:24.648 11:58:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:24.648 11:58:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:24.648 11:58:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:26.555 11:58:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:26.555 11:58:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:26.555 11:58:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:26.555 11:58:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:26.555 11:58:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:26.555 11:58:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:09:26.555 11:58:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:26.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.555 11:58:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:26.555 11:58:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:09:26.555 11:58:16 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:26.555 11:58:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:26.555 11:58:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:26.555 11:58:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:26.555 11:58:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:09:26.555 11:58:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:26.555 11:58:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:26.555 11:58:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.555 11:58:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:26.555 11:58:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:26.555 11:58:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:26.555 11:58:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:26.555 11:58:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:26.555 11:58:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.555 11:58:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:26.555 11:58:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:26.555 11:58:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:26.555 11:58:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.555 [2024-06-10 11:58:16.067575] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:26.555 11:58:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:09:26.555 11:58:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:26.555 11:58:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:26.555 11:58:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.814 11:58:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:26.814 11:58:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:26.814 11:58:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:26.814 11:58:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.814 11:58:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:26.814 11:58:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:28.193 11:58:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:28.193 11:58:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:09:28.193 11:58:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:28.193 11:58:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:28.193 11:58:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:30.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 
-- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.100 [2024-06-10 11:58:19.605966] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:30.100 11:58:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.360 11:58:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:30.360 11:58:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:31.740 11:58:20 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:31.740 11:58:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:09:31.740 11:58:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:31.740 11:58:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:31.740 11:58:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:33.714 11:58:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:33.714 11:58:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:33.714 11:58:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:33.714 11:58:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:33.714 11:58:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:33.714 11:58:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:09:33.714 11:58:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:33.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.714 11:58:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:33.714 11:58:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:09:33.714 11:58:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:33.714 11:58:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:33.714 11:58:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:33.714 11:58:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:33.714 11:58:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:09:33.714 11:58:23 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:33.714 11:58:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:33.714 11:58:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.714 11:58:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:33.714 11:58:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:33.714 11:58:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:33.714 11:58:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.714 11:58:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:33.714 11:58:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:33.714 11:58:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:33.714 11:58:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:33.714 11:58:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.714 11:58:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:33.714 11:58:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:33.714 11:58:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:33.714 11:58:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.714 [2024-06-10 11:58:23.033431] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:33.714 11:58:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:33.714 11:58:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:33.714 11:58:23 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:33.714 11:58:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.714 11:58:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:33.714 11:58:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:33.714 11:58:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:33.714 11:58:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.714 11:58:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:33.714 11:58:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:35.092 11:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:35.092 11:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:09:35.092 11:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:35.092 11:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:35.092 11:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:36.997 11:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:36.997 11:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:36.997 11:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:36.997 11:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:36.997 11:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:36.997 11:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 
0 00:09:36.997 11:58:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:37.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.256 11:58:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:37.256 11:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:09:37.256 11:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:37.256 11:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:37.256 11:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:37.256 11:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:37.256 11:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:09:37.256 11:58:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:37.256 11:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:37.256 11:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.256 11:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:37.256 11:58:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:37.256 11:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:37.256 11:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.256 11:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:37.256 11:58:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:37.256 11:58:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:37.256 11:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:09:37.256 11:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.256 11:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:37.256 11:58:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:37.256 11:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:37.256 11:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.256 [2024-06-10 11:58:26.608751] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:37.256 11:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:37.256 11:58:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:37.256 11:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:37.256 11:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.256 11:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:37.256 11:58:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:37.256 11:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:37.256 11:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.256 11:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:37.256 11:58:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:38.634 11:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:38.634 11:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 
00:09:38.634 11:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:38.634 11:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:38.634 11:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:40.539 11:58:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:40.539 11:58:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:40.539 11:58:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:40.539 11:58:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:40.539 11:58:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:40.539 11:58:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:09:40.539 11:58:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:40.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.799 11:58:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:40.799 11:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:09:40.799 11:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:40.799 11:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.799 11:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:40.799 11:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.799 11:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:09:40.799 11:58:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:40.799 11:58:30 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:09:40.799 11:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.799 11:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:40.799 11:58:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:40.799 11:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:40.799 11:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.799 11:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:40.799 11:58:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:40.799 11:58:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:40.799 11:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:40.799 11:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.799 11:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:40.799 11:58:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:40.799 11:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:40.799 11:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.799 [2024-06-10 11:58:30.131262] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:40.799 11:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:40.799 11:58:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:40.799 11:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:40.799 11:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:09:40.799 11:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:40.799 11:58:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:40.799 11:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:40.799 11:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.799 11:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:40.799 11:58:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:42.178 11:58:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:42.178 11:58:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:09:42.178 11:58:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:42.178 11:58:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:42.178 11:58:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:44.084 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:44.084 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:44.084 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:44.084 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:44.084 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:44.084 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:09:44.084 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:44.084 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.084 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:44.084 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.344 11:58:33 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.344 [2024-06-10 11:58:33.675013] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.344 [2024-06-10 11:58:33.723135] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.344 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.344 [2024-06-10 11:58:33.775285] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.345 11:58:33 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.345 [2024-06-10 11:58:33.823448] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.345 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.604 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.604 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:44.604 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.604 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.604 [2024-06-10 11:58:33.871621] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:44.604 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.604 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:44.604 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.604 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.604 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.604 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:44.604 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:09:44.604 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.604 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.604 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.604 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.604 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.604 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.604 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.604 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.604 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.605 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.605 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:44.605 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.605 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.605 11:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.605 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:44.605 "tick_rate": 2500000000, 00:09:44.605 "poll_groups": [ 00:09:44.605 { 00:09:44.605 "name": "nvmf_tgt_poll_group_000", 00:09:44.605 "admin_qpairs": 2, 00:09:44.605 "io_qpairs": 196, 00:09:44.605 "current_admin_qpairs": 0, 00:09:44.605 "current_io_qpairs": 0, 00:09:44.605 "pending_bdev_io": 0, 00:09:44.605 "completed_nvme_io": 247, 00:09:44.605 "transports": [ 00:09:44.605 { 00:09:44.605 "trtype": "TCP" 00:09:44.605 } 00:09:44.605 ] 00:09:44.605 }, 00:09:44.605 { 00:09:44.605 "name": "nvmf_tgt_poll_group_001", 00:09:44.605 "admin_qpairs": 2, 00:09:44.605 "io_qpairs": 196, 
00:09:44.605 "current_admin_qpairs": 0, 00:09:44.605 "current_io_qpairs": 0, 00:09:44.605 "pending_bdev_io": 0, 00:09:44.605 "completed_nvme_io": 296, 00:09:44.605 "transports": [ 00:09:44.605 { 00:09:44.605 "trtype": "TCP" 00:09:44.605 } 00:09:44.605 ] 00:09:44.605 }, 00:09:44.605 { 00:09:44.605 "name": "nvmf_tgt_poll_group_002", 00:09:44.605 "admin_qpairs": 1, 00:09:44.605 "io_qpairs": 196, 00:09:44.605 "current_admin_qpairs": 0, 00:09:44.605 "current_io_qpairs": 0, 00:09:44.605 "pending_bdev_io": 0, 00:09:44.605 "completed_nvme_io": 295, 00:09:44.605 "transports": [ 00:09:44.605 { 00:09:44.605 "trtype": "TCP" 00:09:44.605 } 00:09:44.605 ] 00:09:44.605 }, 00:09:44.605 { 00:09:44.605 "name": "nvmf_tgt_poll_group_003", 00:09:44.605 "admin_qpairs": 2, 00:09:44.605 "io_qpairs": 196, 00:09:44.605 "current_admin_qpairs": 0, 00:09:44.605 "current_io_qpairs": 0, 00:09:44.605 "pending_bdev_io": 0, 00:09:44.605 "completed_nvme_io": 296, 00:09:44.605 "transports": [ 00:09:44.605 { 00:09:44.605 "trtype": "TCP" 00:09:44.605 } 00:09:44.605 ] 00:09:44.605 } 00:09:44.605 ] 00:09:44.605 }' 00:09:44.605 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:44.605 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:44.605 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:44.605 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:44.605 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:44.605 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:44.605 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:44.605 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:44.605 11:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:44.605 11:58:34 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@113 -- # (( 784 > 0 )) 00:09:44.605 11:58:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:44.605 11:58:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:44.605 11:58:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:44.605 11:58:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:44.605 11:58:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:44.605 11:58:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:44.605 11:58:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:44.605 11:58:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:44.605 11:58:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:44.605 rmmod nvme_tcp 00:09:44.605 rmmod nvme_fabrics 00:09:44.605 rmmod nvme_keyring 00:09:44.605 11:58:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:44.605 11:58:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:44.605 11:58:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:44.605 11:58:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2085153 ']' 00:09:44.605 11:58:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2085153 00:09:44.605 11:58:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@949 -- # '[' -z 2085153 ']' 00:09:44.605 11:58:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # kill -0 2085153 00:09:44.605 11:58:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # uname 00:09:44.605 11:58:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:44.605 11:58:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2085153 00:09:44.864 11:58:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:44.864 11:58:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:44.864 
11:58:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2085153' 00:09:44.864 killing process with pid 2085153 00:09:44.864 11:58:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@968 -- # kill 2085153 00:09:44.864 11:58:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@973 -- # wait 2085153 00:09:44.864 11:58:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:44.864 11:58:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:44.864 11:58:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:44.864 11:58:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:44.864 11:58:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:44.864 11:58:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.864 11:58:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:44.864 11:58:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.403 11:58:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:47.403 00:09:47.403 real 0m35.489s 00:09:47.403 user 1m46.213s 00:09:47.403 sys 0m7.896s 00:09:47.403 11:58:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:47.403 11:58:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:47.403 ************************************ 00:09:47.403 END TEST nvmf_rpc 00:09:47.403 ************************************ 00:09:47.403 11:58:36 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:47.403 11:58:36 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:47.403 11:58:36 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:47.403 11:58:36 nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:09:47.403 ************************************ 00:09:47.403 START TEST nvmf_invalid 00:09:47.403 ************************************ 00:09:47.403 11:58:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:47.403 * Looking for test storage... 00:09:47.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:47.403 11:58:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:47.403 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:47.403 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.403 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.403 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.403 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.403 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.403 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.403 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:47.404 11:58:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:53.975 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:53.975 11:58:42 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:53.975 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:53.975 Found net devices under 0000:af:00.0: cvl_0_0 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:53.975 Found net devices under 0000:af:00.1: cvl_0_1 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:53.975 11:58:42 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:53.975 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:53.976 11:58:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:53.976 11:58:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:53.976 11:58:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:53.976 11:58:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:53.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:53.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:09:53.976 00:09:53.976 --- 10.0.0.2 ping statistics --- 00:09:53.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.976 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:09:53.976 11:58:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:53.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:53.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:09:53.976 00:09:53.976 --- 10.0.0.1 ping statistics --- 00:09:53.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.976 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:09:53.976 11:58:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:53.976 11:58:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:53.976 11:58:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:53.976 11:58:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:53.976 11:58:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:53.976 11:58:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:53.976 11:58:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:53.976 11:58:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:53.976 11:58:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:53.976 11:58:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:53.976 11:58:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:53.976 11:58:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:53.976 11:58:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:53.976 11:58:43 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@481 -- # nvmfpid=2093304 00:09:53.976 11:58:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2093304 00:09:53.976 11:58:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@830 -- # '[' -z 2093304 ']' 00:09:53.976 11:58:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.976 11:58:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:53.976 11:58:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.976 11:58:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:53.976 11:58:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:53.976 11:58:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:53.976 [2024-06-10 11:58:43.199291] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:09:53.976 [2024-06-10 11:58:43.199356] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.976 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.976 [2024-06-10 11:58:43.274519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:53.976 [2024-06-10 11:58:43.348137] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:53.976 [2024-06-10 11:58:43.348179] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:53.976 [2024-06-10 11:58:43.348188] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.976 [2024-06-10 11:58:43.348196] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.976 [2024-06-10 11:58:43.348202] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:53.976 [2024-06-10 11:58:43.348266] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.976 [2024-06-10 11:58:43.348360] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.976 [2024-06-10 11:58:43.348453] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:09:53.976 [2024-06-10 11:58:43.348455] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.544 11:58:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:54.544 11:58:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@863 -- # return 0 00:09:54.544 11:58:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:54.544 11:58:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:54.544 11:58:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:54.544 11:58:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:54.544 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:54.544 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode15042 00:09:54.802 [2024-06-10 11:58:44.197721] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:54.802 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- 
# out='request: 00:09:54.802 { 00:09:54.802 "nqn": "nqn.2016-06.io.spdk:cnode15042", 00:09:54.802 "tgt_name": "foobar", 00:09:54.802 "method": "nvmf_create_subsystem", 00:09:54.802 "req_id": 1 00:09:54.802 } 00:09:54.802 Got JSON-RPC error response 00:09:54.802 response: 00:09:54.802 { 00:09:54.802 "code": -32603, 00:09:54.802 "message": "Unable to find target foobar" 00:09:54.802 }' 00:09:54.802 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:54.802 { 00:09:54.802 "nqn": "nqn.2016-06.io.spdk:cnode15042", 00:09:54.802 "tgt_name": "foobar", 00:09:54.802 "method": "nvmf_create_subsystem", 00:09:54.802 "req_id": 1 00:09:54.802 } 00:09:54.802 Got JSON-RPC error response 00:09:54.802 response: 00:09:54.802 { 00:09:54.802 "code": -32603, 00:09:54.802 "message": "Unable to find target foobar" 00:09:54.802 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:54.802 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:54.803 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode8385 00:09:55.061 [2024-06-10 11:58:44.382412] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8385: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:55.061 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:55.061 { 00:09:55.061 "nqn": "nqn.2016-06.io.spdk:cnode8385", 00:09:55.061 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:55.061 "method": "nvmf_create_subsystem", 00:09:55.061 "req_id": 1 00:09:55.061 } 00:09:55.061 Got JSON-RPC error response 00:09:55.061 response: 00:09:55.061 { 00:09:55.061 "code": -32602, 00:09:55.061 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:55.061 }' 00:09:55.061 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:55.061 { 00:09:55.061 "nqn": 
"nqn.2016-06.io.spdk:cnode8385", 00:09:55.061 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:55.061 "method": "nvmf_create_subsystem", 00:09:55.061 "req_id": 1 00:09:55.061 } 00:09:55.061 Got JSON-RPC error response 00:09:55.061 response: 00:09:55.061 { 00:09:55.061 "code": -32602, 00:09:55.061 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:55.061 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:55.061 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:55.061 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode4307 00:09:55.061 [2024-06-10 11:58:44.566991] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4307: invalid model number 'SPDK_Controller' 00:09:55.320 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:55.320 { 00:09:55.320 "nqn": "nqn.2016-06.io.spdk:cnode4307", 00:09:55.320 "model_number": "SPDK_Controller\u001f", 00:09:55.320 "method": "nvmf_create_subsystem", 00:09:55.320 "req_id": 1 00:09:55.320 } 00:09:55.320 Got JSON-RPC error response 00:09:55.320 response: 00:09:55.320 { 00:09:55.320 "code": -32602, 00:09:55.320 "message": "Invalid MN SPDK_Controller\u001f" 00:09:55.320 }' 00:09:55.320 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:55.320 { 00:09:55.320 "nqn": "nqn.2016-06.io.spdk:cnode4307", 00:09:55.320 "model_number": "SPDK_Controller\u001f", 00:09:55.320 "method": "nvmf_create_subsystem", 00:09:55.320 "req_id": 1 00:09:55.320 } 00:09:55.320 Got JSON-RPC error response 00:09:55.320 response: 00:09:55.320 { 00:09:55.320 "code": -32602, 00:09:55.320 "message": "Invalid MN SPDK_Controller\u001f" 00:09:55.320 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:55.320 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:55.320 11:58:44 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@19 -- # local length=21 ll 00:09:55.320 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:55.320 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:55.320 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:55.320 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:55.320 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.320 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:09:55.320 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:55.320 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:09:55.320 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x76' 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=x 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 124 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 4 == \- ]] 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '4"v-nPMHfPHxe*Jw|^HDx' 00:09:55.321 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '4"v-nPMHfPHxe*Jw|^HDx' nqn.2016-06.io.spdk:cnode32063 00:09:55.580 [2024-06-10 11:58:44.924205] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32063: invalid serial number '4"v-nPMHfPHxe*Jw|^HDx' 00:09:55.580 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:55.580 { 00:09:55.580 "nqn": "nqn.2016-06.io.spdk:cnode32063", 00:09:55.580 "serial_number": "4\"v-nPMHfPHxe*Jw|^HDx", 00:09:55.580 "method": "nvmf_create_subsystem", 00:09:55.580 "req_id": 1 00:09:55.580 } 00:09:55.580 Got JSON-RPC error response 00:09:55.580 response: 00:09:55.580 { 00:09:55.580 "code": -32602, 00:09:55.580 "message": "Invalid SN 4\"v-nPMHfPHxe*Jw|^HDx" 00:09:55.580 }' 00:09:55.580 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:55.580 { 00:09:55.580 "nqn": "nqn.2016-06.io.spdk:cnode32063", 00:09:55.580 "serial_number": "4\"v-nPMHfPHxe*Jw|^HDx", 00:09:55.580 "method": "nvmf_create_subsystem", 00:09:55.580 "req_id": 1 00:09:55.580 } 00:09:55.580 Got JSON-RPC error response 00:09:55.580 response: 00:09:55.580 { 00:09:55.580 "code": -32602, 00:09:55.580 "message": "Invalid SN 4\"v-nPMHfPHxe*Jw|^HDx" 00:09:55.580 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:55.580 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:55.580 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:55.580 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' 
'37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:55.580 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:55.580 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:55.580 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:55.580 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.580 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:09:55.580 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:09:55.580 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:09:55.580 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.580 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.580 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:09:55.580 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:09:55.580 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:09:55.580 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.580 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.580 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:09:55.580 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:55.580 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:09:55.580 11:58:44 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.580 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.580 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:09:55.580 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:09:55.580 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:09:55.580 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.580 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.580 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:09:55.580 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:09:55.580 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:09:55.580 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.580 11:58:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:09:55.580 11:58:45 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.580 11:58:45 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:09:55.580 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:09:55.581 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:09:55.581 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.581 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.581 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:09:55.581 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:55.581 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:09:55.581 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.581 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.581 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:09:55.581 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:09:55.581 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:09:55.581 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.581 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.581 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:09:55.581 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:09:55.581 
11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:09:55.581 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.581 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.581 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.840 11:58:45 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.840 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:09:55.841 11:58:45 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:09:55.841 11:58:45 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.841 11:58:45 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:09:55.841 11:58:45 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ y == \- ]] 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'y`<}w\Y"QkDYu<[eAl*3uLV[pVsp2D]MX&EJJ>+' 00:09:55.841 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'y`<}w\Y"QkDYu<[eAl*3uLV[pVsp2D]MX&EJJ>+' nqn.2016-06.io.spdk:cnode2112 00:09:56.100 [2024-06-10 11:58:45.437938] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2112: invalid model number 'y`<}w\Y"QkDYu<[eAl*3uLV[pVsp2D]MX&EJJ>+' 00:09:56.100 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:09:56.100 { 00:09:56.100 "nqn": "nqn.2016-06.io.spdk:cnode2112", 00:09:56.100 "model_number": "y`<}w\\Y\"QkDYu\u007f<[eAl*3uL\u007fV[pVsp2D]MX&EJJ>+", 00:09:56.100 "method": "nvmf_create_subsystem", 00:09:56.100 "req_id": 1 00:09:56.100 } 00:09:56.100 Got JSON-RPC error response 00:09:56.100 response: 00:09:56.100 { 00:09:56.100 "code": -32602, 00:09:56.100 "message": "Invalid MN y`<}w\\Y\"QkDYu\u007f<[eAl*3uL\u007fV[pVsp2D]MX&EJJ>+" 00:09:56.100 }' 00:09:56.100 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:09:56.100 { 00:09:56.100 "nqn": 
"nqn.2016-06.io.spdk:cnode2112", 00:09:56.100 "model_number": "y`<}w\\Y\"QkDYu\u007f<[eAl*3uL\u007fV[pVsp2D]MX&EJJ>+", 00:09:56.100 "method": "nvmf_create_subsystem", 00:09:56.100 "req_id": 1 00:09:56.100 } 00:09:56.100 Got JSON-RPC error response 00:09:56.100 response: 00:09:56.100 { 00:09:56.100 "code": -32602, 00:09:56.100 "message": "Invalid MN y`<}w\\Y\"QkDYu\u007f<[eAl*3uL\u007fV[pVsp2D]MX&EJJ>+" 00:09:56.100 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:56.100 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:56.100 [2024-06-10 11:58:45.618615] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:56.360 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:56.360 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:56.360 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:09:56.360 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:09:56.360 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:09:56.360 11:58:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:56.619 [2024-06-10 11:58:45.995900] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:56.619 11:58:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:09:56.619 { 00:09:56.619 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:56.619 "listen_address": { 00:09:56.619 "trtype": "tcp", 00:09:56.619 "traddr": "", 00:09:56.619 "trsvcid": "4421" 00:09:56.619 }, 00:09:56.619 "method": "nvmf_subsystem_remove_listener", 00:09:56.619 "req_id": 1 00:09:56.619 } 
00:09:56.619 Got JSON-RPC error response 00:09:56.619 response: 00:09:56.619 { 00:09:56.619 "code": -32602, 00:09:56.619 "message": "Invalid parameters" 00:09:56.619 }' 00:09:56.619 11:58:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:09:56.619 { 00:09:56.619 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:56.619 "listen_address": { 00:09:56.619 "trtype": "tcp", 00:09:56.619 "traddr": "", 00:09:56.619 "trsvcid": "4421" 00:09:56.619 }, 00:09:56.619 "method": "nvmf_subsystem_remove_listener", 00:09:56.619 "req_id": 1 00:09:56.619 } 00:09:56.619 Got JSON-RPC error response 00:09:56.619 response: 00:09:56.619 { 00:09:56.619 "code": -32602, 00:09:56.619 "message": "Invalid parameters" 00:09:56.619 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:56.619 11:58:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9270 -i 0 00:09:56.878 [2024-06-10 11:58:46.172467] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9270: invalid cntlid range [0-65519] 00:09:56.878 11:58:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:09:56.878 { 00:09:56.878 "nqn": "nqn.2016-06.io.spdk:cnode9270", 00:09:56.878 "min_cntlid": 0, 00:09:56.878 "method": "nvmf_create_subsystem", 00:09:56.878 "req_id": 1 00:09:56.878 } 00:09:56.878 Got JSON-RPC error response 00:09:56.878 response: 00:09:56.878 { 00:09:56.878 "code": -32602, 00:09:56.878 "message": "Invalid cntlid range [0-65519]" 00:09:56.878 }' 00:09:56.878 11:58:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:09:56.878 { 00:09:56.878 "nqn": "nqn.2016-06.io.spdk:cnode9270", 00:09:56.878 "min_cntlid": 0, 00:09:56.878 "method": "nvmf_create_subsystem", 00:09:56.878 "req_id": 1 00:09:56.878 } 00:09:56.878 Got JSON-RPC error response 00:09:56.878 response: 00:09:56.878 { 00:09:56.878 "code": -32602, 00:09:56.878 "message": 
"Invalid cntlid range [0-65519]" 00:09:56.878 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:56.878 11:58:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11169 -i 65520 00:09:56.878 [2024-06-10 11:58:46.361180] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11169: invalid cntlid range [65520-65519] 00:09:56.878 11:58:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:09:56.878 { 00:09:56.878 "nqn": "nqn.2016-06.io.spdk:cnode11169", 00:09:56.878 "min_cntlid": 65520, 00:09:56.878 "method": "nvmf_create_subsystem", 00:09:56.878 "req_id": 1 00:09:56.878 } 00:09:56.878 Got JSON-RPC error response 00:09:56.878 response: 00:09:56.878 { 00:09:56.878 "code": -32602, 00:09:56.878 "message": "Invalid cntlid range [65520-65519]" 00:09:56.878 }' 00:09:56.878 11:58:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:09:56.878 { 00:09:56.878 "nqn": "nqn.2016-06.io.spdk:cnode11169", 00:09:56.878 "min_cntlid": 65520, 00:09:56.878 "method": "nvmf_create_subsystem", 00:09:56.878 "req_id": 1 00:09:56.878 } 00:09:56.878 Got JSON-RPC error response 00:09:56.878 response: 00:09:56.878 { 00:09:56.878 "code": -32602, 00:09:56.878 "message": "Invalid cntlid range [65520-65519]" 00:09:56.878 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:56.878 11:58:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26199 -I 0 00:09:57.137 [2024-06-10 11:58:46.553786] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26199: invalid cntlid range [1-0] 00:09:57.137 11:58:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:09:57.137 { 00:09:57.137 "nqn": "nqn.2016-06.io.spdk:cnode26199", 00:09:57.137 "max_cntlid": 0, 00:09:57.137 
"method": "nvmf_create_subsystem", 00:09:57.137 "req_id": 1 00:09:57.137 } 00:09:57.137 Got JSON-RPC error response 00:09:57.137 response: 00:09:57.137 { 00:09:57.137 "code": -32602, 00:09:57.137 "message": "Invalid cntlid range [1-0]" 00:09:57.137 }' 00:09:57.137 11:58:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:09:57.137 { 00:09:57.137 "nqn": "nqn.2016-06.io.spdk:cnode26199", 00:09:57.137 "max_cntlid": 0, 00:09:57.137 "method": "nvmf_create_subsystem", 00:09:57.137 "req_id": 1 00:09:57.137 } 00:09:57.137 Got JSON-RPC error response 00:09:57.137 response: 00:09:57.137 { 00:09:57.137 "code": -32602, 00:09:57.137 "message": "Invalid cntlid range [1-0]" 00:09:57.137 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:57.137 11:58:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15129 -I 65520 00:09:57.396 [2024-06-10 11:58:46.746421] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15129: invalid cntlid range [1-65520] 00:09:57.396 11:58:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:09:57.396 { 00:09:57.396 "nqn": "nqn.2016-06.io.spdk:cnode15129", 00:09:57.396 "max_cntlid": 65520, 00:09:57.396 "method": "nvmf_create_subsystem", 00:09:57.396 "req_id": 1 00:09:57.396 } 00:09:57.396 Got JSON-RPC error response 00:09:57.396 response: 00:09:57.396 { 00:09:57.396 "code": -32602, 00:09:57.396 "message": "Invalid cntlid range [1-65520]" 00:09:57.396 }' 00:09:57.396 11:58:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:09:57.396 { 00:09:57.396 "nqn": "nqn.2016-06.io.spdk:cnode15129", 00:09:57.396 "max_cntlid": 65520, 00:09:57.396 "method": "nvmf_create_subsystem", 00:09:57.396 "req_id": 1 00:09:57.396 } 00:09:57.396 Got JSON-RPC error response 00:09:57.396 response: 00:09:57.396 { 00:09:57.396 "code": -32602, 00:09:57.396 "message": "Invalid cntlid 
range [1-65520]" 00:09:57.396 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:57.396 11:58:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1681 -i 6 -I 5 00:09:57.655 [2024-06-10 11:58:46.931085] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1681: invalid cntlid range [6-5] 00:09:57.655 11:58:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:09:57.655 { 00:09:57.655 "nqn": "nqn.2016-06.io.spdk:cnode1681", 00:09:57.655 "min_cntlid": 6, 00:09:57.655 "max_cntlid": 5, 00:09:57.655 "method": "nvmf_create_subsystem", 00:09:57.655 "req_id": 1 00:09:57.655 } 00:09:57.655 Got JSON-RPC error response 00:09:57.655 response: 00:09:57.655 { 00:09:57.655 "code": -32602, 00:09:57.655 "message": "Invalid cntlid range [6-5]" 00:09:57.655 }' 00:09:57.655 11:58:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:09:57.655 { 00:09:57.655 "nqn": "nqn.2016-06.io.spdk:cnode1681", 00:09:57.655 "min_cntlid": 6, 00:09:57.655 "max_cntlid": 5, 00:09:57.655 "method": "nvmf_create_subsystem", 00:09:57.655 "req_id": 1 00:09:57.655 } 00:09:57.655 Got JSON-RPC error response 00:09:57.655 response: 00:09:57.655 { 00:09:57.655 "code": -32602, 00:09:57.655 "message": "Invalid cntlid range [6-5]" 00:09:57.655 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:57.655 11:58:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:57.655 11:58:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:09:57.655 { 00:09:57.655 "name": "foobar", 00:09:57.655 "method": "nvmf_delete_target", 00:09:57.655 "req_id": 1 00:09:57.655 } 00:09:57.655 Got JSON-RPC error response 00:09:57.655 response: 00:09:57.655 { 00:09:57.655 "code": -32602, 00:09:57.655 "message": "The 
specified target doesn'\''t exist, cannot delete it." 00:09:57.655 }' 00:09:57.655 11:58:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:09:57.655 { 00:09:57.655 "name": "foobar", 00:09:57.655 "method": "nvmf_delete_target", 00:09:57.655 "req_id": 1 00:09:57.655 } 00:09:57.655 Got JSON-RPC error response 00:09:57.655 response: 00:09:57.655 { 00:09:57.655 "code": -32602, 00:09:57.655 "message": "The specified target doesn't exist, cannot delete it." 00:09:57.655 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:57.655 11:58:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:57.655 11:58:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:09:57.655 11:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:57.655 11:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:09:57.655 11:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:57.655 11:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:09:57.655 11:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:57.655 11:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:57.655 rmmod nvme_tcp 00:09:57.655 rmmod nvme_fabrics 00:09:57.655 rmmod nvme_keyring 00:09:57.655 11:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:57.655 11:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:09:57.655 11:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:09:57.655 11:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2093304 ']' 00:09:57.655 11:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2093304 00:09:57.655 11:58:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@949 -- # '[' -z 2093304 ']' 00:09:57.655 11:58:47 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@953 -- # kill -0 2093304 00:09:57.655 11:58:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # uname 00:09:57.655 11:58:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:57.655 11:58:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2093304 00:09:57.914 11:58:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:57.914 11:58:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:57.914 11:58:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2093304' 00:09:57.914 killing process with pid 2093304 00:09:57.915 11:58:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@968 -- # kill 2093304 00:09:57.915 11:58:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@973 -- # wait 2093304 00:09:57.915 11:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:57.915 11:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:57.915 11:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:57.915 11:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:57.915 11:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:57.915 11:58:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.915 11:58:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:57.915 11:58:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.453 11:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:00.453 00:10:00.453 real 0m12.947s 00:10:00.453 user 0m19.973s 00:10:00.453 sys 0m6.156s 00:10:00.453 11:58:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1125 -- # 
xtrace_disable 00:10:00.453 11:58:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:00.453 ************************************ 00:10:00.453 END TEST nvmf_invalid 00:10:00.453 ************************************ 00:10:00.453 11:58:49 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:00.453 11:58:49 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:10:00.453 11:58:49 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:00.453 11:58:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:00.453 ************************************ 00:10:00.453 START TEST nvmf_abort 00:10:00.453 ************************************ 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:00.454 * Looking for test storage... 
00:10:00.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:10:00.454 11:58:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:07.119 11:58:56 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:07.119 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:07.119 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- 
# (( 0 > 0 )) 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:07.119 Found net devices under 0000:af:00.0: cvl_0_0 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:07.119 Found net devices under 
0000:af:00.1: cvl_0_1 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:07.119 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:07.378 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:07.378 11:58:56 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:07.378 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:07.378 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:07.378 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:07.378 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:07.378 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:07.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:07.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:10:07.378 00:10:07.378 --- 10.0.0.2 ping statistics --- 00:10:07.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.378 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:10:07.378 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:07.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:07.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:10:07.378 00:10:07.378 --- 10.0.0.1 ping statistics --- 00:10:07.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.378 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:10:07.378 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:07.378 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:10:07.378 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:07.378 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:07.378 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:07.378 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:07.378 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:07.378 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:07.378 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:07.378 11:58:56 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:07.378 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:07.378 11:58:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:07.378 11:58:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:07.378 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2097970 00:10:07.378 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:07.378 11:58:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2097970 00:10:07.378 11:58:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@830 -- # '[' -z 2097970 ']' 00:10:07.378 11:58:56 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.378 11:58:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:07.378 11:58:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.378 11:58:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:07.379 11:58:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:07.637 [2024-06-10 11:58:56.917726] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:10:07.637 [2024-06-10 11:58:56.917773] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.637 EAL: No free 2048 kB hugepages reported on node 1 00:10:07.637 [2024-06-10 11:58:56.991004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:07.637 [2024-06-10 11:58:57.058830] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:07.637 [2024-06-10 11:58:57.058872] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:07.637 [2024-06-10 11:58:57.058882] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:07.637 [2024-06-10 11:58:57.058890] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:07.637 [2024-06-10 11:58:57.058897] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:07.638 [2024-06-10 11:58:57.059003] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:10:07.638 [2024-06-10 11:58:57.059104] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:10:07.638 [2024-06-10 11:58:57.059106] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.204 11:58:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:08.204 11:58:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@863 -- # return 0 00:10:08.463 11:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:08.463 11:58:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:08.463 11:58:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:08.463 11:58:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:08.463 11:58:57 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:08.463 11:58:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:08.463 11:58:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:08.463 [2024-06-10 11:58:57.774684] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:08.463 11:58:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:08.463 11:58:57 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:08.463 11:58:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:08.463 11:58:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:08.463 Malloc0 00:10:08.463 11:58:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:08.463 11:58:57 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 
1000000 00:10:08.463 11:58:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:08.463 11:58:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:08.463 Delay0 00:10:08.463 11:58:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:08.464 11:58:57 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:08.464 11:58:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:08.464 11:58:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:08.464 11:58:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:08.464 11:58:57 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:08.464 11:58:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:08.464 11:58:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:08.464 11:58:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:08.464 11:58:57 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:08.464 11:58:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:08.464 11:58:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:08.464 [2024-06-10 11:58:57.855148] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:08.464 11:58:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:08.464 11:58:57 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:08.464 11:58:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:08.464 11:58:57 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:10:08.464 11:58:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:08.464 11:58:57 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:08.464 EAL: No free 2048 kB hugepages reported on node 1 00:10:08.464 [2024-06-10 11:58:57.930986] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:10.997 Initializing NVMe Controllers 00:10:10.997 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:10.997 controller IO queue size 128 less than required 00:10:10.997 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:10.997 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:10.997 Initialization complete. Launching workers. 
00:10:10.997 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 43621 00:10:10.997 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 43682, failed to submit 62 00:10:10.997 success 43625, unsuccess 57, failed 0 00:10:10.997 11:58:59 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:10.997 11:58:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:10.997 11:58:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:10.997 11:58:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:10.997 11:58:59 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:10.997 11:58:59 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:10.997 11:58:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:10.997 11:58:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:10:10.997 11:58:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:10.997 11:58:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:10:10.997 11:58:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:10.997 11:58:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:10.997 rmmod nvme_tcp 00:10:10.997 rmmod nvme_fabrics 00:10:10.997 rmmod nvme_keyring 00:10:10.997 11:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:10.997 11:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:10:10.997 11:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:10:10.997 11:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2097970 ']' 00:10:10.997 11:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2097970 00:10:10.997 11:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@949 -- # '[' -z 2097970 ']' 00:10:10.997 11:59:00 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # kill -0 2097970 00:10:10.997 11:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # uname 00:10:10.997 11:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:10.997 11:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2097970 00:10:10.997 11:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:10:10.997 11:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:10:10.997 11:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2097970' 00:10:10.997 killing process with pid 2097970 00:10:10.997 11:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@968 -- # kill 2097970 00:10:10.997 11:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@973 -- # wait 2097970 00:10:10.997 11:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:10.997 11:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:10.997 11:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:10.997 11:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:10.997 11:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:10.997 11:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.997 11:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:10.997 11:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.901 11:59:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:12.901 00:10:12.901 real 0m12.852s 00:10:12.901 user 0m13.041s 00:10:12.901 sys 0m6.676s 00:10:12.901 11:59:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 
00:10:12.901 11:59:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:12.901 ************************************ 00:10:12.901 END TEST nvmf_abort 00:10:12.901 ************************************ 00:10:13.159 11:59:02 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:13.159 11:59:02 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:10:13.159 11:59:02 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:13.159 11:59:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:13.159 ************************************ 00:10:13.159 START TEST nvmf_ns_hotplug_stress 00:10:13.159 ************************************ 00:10:13.159 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:13.159 * Looking for test storage... 
00:10:13.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:13.159 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:13.159 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:13.159 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:13.159 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.159 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:13.159 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.159 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.159 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.159 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.159 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.159 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.159 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:13.159 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:13.159 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:13.159 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:13.159 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:13.159 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:13.159 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:13.159 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:13.159 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.159 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.159 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.159 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.159 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.159 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.159 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:13.159 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.159 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:10:13.159 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:13.159 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:13.160 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:13.160 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:13.160 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:13.160 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:13.160 11:59:02 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:13.160 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:13.160 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:13.160 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:13.160 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:13.160 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:13.160 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:13.160 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:13.160 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:13.160 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.160 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:13.160 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.160 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:13.160 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:13.160 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:13.160 11:59:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:21.280 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:21.280 
11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:21.280 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.280 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:21.281 
Found net devices under 0000:af:00.0: cvl_0_0 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:21.281 Found net devices under 0000:af:00.1: cvl_0_1 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:21.281 11:59:09 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:21.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:21.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:10:21.281 00:10:21.281 --- 10.0.0.2 ping statistics --- 00:10:21.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.281 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:21.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:21.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:10:21.281 00:10:21.281 --- 10.0.0.1 ping statistics --- 00:10:21.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.281 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- 
# timing_enter start_nvmf_tgt 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2102227 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2102227 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@830 -- # '[' -z 2102227 ']' 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:21.281 11:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.281 [2024-06-10 11:59:09.646997] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
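[Editor's note] The "Found 0000:af:00.0 (0x8086 - 0x159b)" lines earlier in this log come from SPDK's `gather_supported_nvmf_pci_devs`, which scans sysfs for known NIC device IDs and records the kernel netdev behind each port. A simplified sketch of that scan, parameterized by a sysfs root so it can be exercised without real hardware (the function name and the abbreviated ID list are illustrative, not SPDK's actual code):

```shell
# Hedged sketch of the NIC discovery logged above. Only the Intel E810 IDs
# seen in this run are listed; SPDK's real table covers more devices.
discover_nvmf_nics() {
    local root=${1:-/sys/bus/pci/devices}
    local intel=0x8086
    local e810_ids="0x1592 0x159b"      # E810 device IDs from this log
    local dev vendor device id
    for dev in "$root"/*; do
        [ -e "$dev/vendor" ] || continue
        [ -e "$dev/device" ] || continue
        vendor=$(cat "$dev/vendor")
        device=$(cat "$dev/device")
        [ "$vendor" = "$intel" ] || continue
        for id in $e810_ids; do
            [ "$device" = "$id" ] || continue
            echo "Found ${dev##*/} ($vendor - $device)"
            ls "$dev/net" 2>/dev/null   # kernel netdev name(s) for this port
        done
    done
}
```

Run against the real `/sys/bus/pci/devices` on the test node, this prints entries like the "Found 0000:af:00.0 (0x8086 - 0x159b)" / "cvl_0_0" pairs in the log above.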
00:10:21.281 [2024-06-10 11:59:09.647042] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.281 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.281 [2024-06-10 11:59:09.719366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:21.281 [2024-06-10 11:59:09.792722] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:21.281 [2024-06-10 11:59:09.792763] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:21.281 [2024-06-10 11:59:09.792772] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:21.281 [2024-06-10 11:59:09.792781] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:21.281 [2024-06-10 11:59:09.792789] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
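[Editor's note] The network topology that `nvmftestinit` assembled above puts the target port (`cvl_0_0`) into a namespace and leaves the initiator port (`cvl_0_1`) in the root namespace, so one host can talk NVMe/TCP to itself over real NICs. A standalone sketch of those steps, using the interface names and 10.0.0.0/24 addresses from this log; `run()` is a hypothetical wrapper so the plan can be printed without root:

```shell
# Sketch of the netns topology built by nvmftestinit in this run.
run() { if [ -n "$DRY_RUN" ]; then echo "+ $*"; else sudo "$@"; fi; }

setup_tcp_topology() {
    local ns=cvl_0_0_ns_spdk tgt_if=cvl_0_0 ini_if=cvl_0_1
    run ip netns add "$ns"
    run ip link set "$tgt_if" netns "$ns"         # target side lives in the namespace
    run ip addr add 10.0.0.1/24 dev "$ini_if"     # initiator address
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    run ip link set "$ini_if" up
    run ip netns exec "$ns" ip link set "$tgt_if" up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2                        # initiator -> target
    run ip netns exec "$ns" ping -c 1 10.0.0.1    # target -> initiator
}
```

With `DRY_RUN=1` the function just echoes the ten commands, matching the `ip`/`iptables`/`ping` sequence in the log.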
00:10:21.281 [2024-06-10 11:59:09.792828] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:10:21.281 [2024-06-10 11:59:09.792910] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:10:21.281 [2024-06-10 11:59:09.792912] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.281 11:59:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:21.281 11:59:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@863 -- # return 0 00:10:21.281 11:59:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:21.281 11:59:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:21.281 11:59:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.281 11:59:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:21.281 11:59:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:21.281 11:59:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:21.281 [2024-06-10 11:59:10.656596] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:21.281 11:59:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:21.540 11:59:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:21.540 [2024-06-10 11:59:11.010325] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 10.0.0.2 port 4420 *** 00:10:21.540 11:59:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:21.798 11:59:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:22.056 Malloc0 00:10:22.056 11:59:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:22.315 Delay0 00:10:22.315 11:59:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.315 11:59:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:22.573 NULL1 00:10:22.573 11:59:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:22.830 11:59:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2102777 00:10:22.830 11:59:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:22.830 11:59:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2102777 00:10:22.830 11:59:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.830 EAL: No free 2048 kB hugepages reported on node 1 00:10:24.200 Read completed with error (sct=0, sc=11) 00:10:24.200 11:59:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.200 11:59:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:24.200 11:59:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:24.200 true 00:10:24.200 11:59:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2102777 00:10:24.200 11:59:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.131 11:59:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.389 11:59:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:25.389 11:59:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1002 00:10:25.389 true 00:10:25.389 11:59:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2102777 00:10:25.389 11:59:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.648 11:59:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.906 11:59:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:25.906 11:59:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:25.906 true 00:10:26.165 11:59:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2102777 00:10:26.165 11:59:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.099 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.099 11:59:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.099 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.099 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.357 11:59:16 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:27.357 11:59:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:27.615 true 00:10:27.615 11:59:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2102777 00:10:27.615 11:59:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.546 11:59:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.546 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.546 11:59:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:28.546 11:59:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:28.803 true 00:10:28.803 11:59:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2102777 00:10:28.803 11:59:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.803 11:59:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.059 11:59:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:29.059 11:59:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1006 00:10:29.316 true 00:10:29.316 11:59:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2102777 00:10:29.316 11:59:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.573 11:59:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.573 11:59:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:29.573 11:59:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:29.831 true 00:10:29.831 11:59:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2102777 00:10:29.831 11:59:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.088 11:59:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.088 11:59:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:30.088 11:59:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:30.346 true 00:10:30.346 11:59:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2102777 00:10:30.346 11:59:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.720 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.720 11:59:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.720 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.720 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.720 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.720 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.720 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.720 11:59:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:31.720 11:59:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:31.979 true 00:10:31.979 11:59:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2102777 00:10:31.979 11:59:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.913 11:59:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.913 11:59:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:32.913 11:59:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:33.208 true 
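[Editor's note] Before the remove/add/resize iterations seen here, `ns_hotplug_stress.sh` issued a one-time target configuration over RPC: a TCP transport, subsystem `cnode1` with a listener on 10.0.0.2:4420, and two namespaces backed by a delay bdev and a null bdev. Collected into one sketch; `rpc()` is a hypothetical wrapper around `scripts/rpc.py`, and `DRY_RUN` prints the calls instead of issuing them:

```shell
# The RPC setup sequence from this run, gathered in one place.
rpc() {
    if [ -n "$DRY_RUN" ]; then echo "rpc.py $*"; else "$SPDK_DIR/scripts/rpc.py" "$@"; fi
}

configure_target() {
    local nqn=nqn.2016-06.io.spdk:cnode1
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10   # up to 10 namespaces
    rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc bdev_malloc_create 32 512 -b Malloc0      # 32 MiB, 512 B blocks
    rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc nvmf_subsystem_add_ns "$nqn" Delay0       # namespace 1, the one hot-removed below
    rpc bdev_null_create NULL1 1000 512           # the bdev the loop keeps resizing
    rpc nvmf_subsystem_add_ns "$nqn" NULL1
}
```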
00:10:33.208 11:59:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2102777 00:10:33.208 11:59:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.208 11:59:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.497 11:59:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:33.497 11:59:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:33.497 true 00:10:33.754 11:59:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2102777 00:10:33.754 11:59:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.754 11:59:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.754 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:33.754 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:33.754 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:34.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:34.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:34.012 11:59:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:34.012 11:59:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:34.271 true 00:10:34.271 11:59:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2102777 00:10:34.271 11:59:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.207 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:35.207 11:59:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.207 11:59:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:35.207 11:59:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:35.465 true 00:10:35.465 11:59:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2102777 00:10:35.465 11:59:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.465 11:59:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.723 11:59:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:35.723 11:59:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:35.981 true 00:10:35.981 11:59:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
2102777 00:10:35.981 11:59:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.981 11:59:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.240 11:59:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:36.240 11:59:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:36.498 true 00:10:36.498 11:59:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2102777 00:10:36.498 11:59:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.756 11:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.756 11:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:36.756 11:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:37.014 true 00:10:37.014 11:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2102777 00:10:37.014 11:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
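[Editor's note] The I/O load driving these "Read completed with error (sct=0, sc=11)" messages is the `spdk_nvme_perf` process started at the top of the test (PID 2102777 in this run): 30 seconds of queue-depth-128 random reads against the TCP target. sc=11 is presumably NVMe generic status 0x0b, Invalid Namespace or Format, surfacing while namespace 1 is detached. A sketch of the launch; `start_perf()` is a hypothetical wrapper, and `DRY_RUN` prints the command instead of running it:

```shell
# Sketch of the spdk_nvme_perf invocation from this run.
start_perf() {
    local perf=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}/build/bin/spdk_nvme_perf
    local cmd=("$perf" -c 0x1
               -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
               -t 30 -q 128 -w randread -o 512 -Q 1000)   # -Q 1000: suppress repeated errors
    if [ -n "$DRY_RUN" ]; then
        echo "${cmd[*]}"
    else
        "${cmd[@]}" & PERF_PID=$!    # the stress loop later probes: kill -0 "$PERF_PID"
    fi
}
```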
00:10:38.388 11:59:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.388 11:59:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:38.388 11:59:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:38.646 true 00:10:38.646 11:59:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2102777 00:10:38.646 11:59:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.580 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.580 11:59:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.580 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.580 11:59:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:39.580 11:59:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:39.838 true 00:10:39.838 11:59:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2102777 00:10:39.838 
11:59:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.838 11:59:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.097 11:59:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:40.097 11:59:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:40.355 true 00:10:40.355 11:59:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2102777 00:10:40.355 11:59:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.355 11:59:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.613 11:59:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:40.613 11:59:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:40.872 true 00:10:40.872 11:59:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2102777 00:10:40.872 11:59:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.131 11:59:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.131 11:59:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:41.131 11:59:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:41.389 true 00:10:41.389 11:59:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2102777 00:10:41.389 11:59:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.647 11:59:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.647 11:59:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:41.647 11:59:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:41.905 true 00:10:41.905 11:59:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2102777 00:10:41.905 11:59:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.164 11:59:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.422 11:59:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:42.422 11:59:31 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:42.422 true 00:10:42.422 11:59:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2102777 00:10:42.422 11:59:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.796 11:59:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.796 11:59:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:43.796 11:59:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:44.053 true 00:10:44.053 11:59:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2102777 00:10:44.053 11:59:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:44.985 11:59:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:44.985 11:59:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:44.985 11:59:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:45.242 true 00:10:45.242 11:59:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2102777 00:10:45.242 11:59:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.500 11:59:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.500 11:59:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:45.500 11:59:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:45.758 true 00:10:45.759 11:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2102777 00:10:45.759 11:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.134 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.134 11:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:47.134 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.134 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.134 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.134 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.134 11:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:47.134 11:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:47.391 true 00:10:47.391 11:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2102777 00:10:47.391 11:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.325 11:59:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.325 11:59:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:48.325 11:59:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:48.582 true 00:10:48.582 11:59:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2102777 00:10:48.582 11:59:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.840 11:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.840 11:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:48.840 11:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:49.098 true 00:10:49.098 11:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2102777 00:10:49.098 11:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.471 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:50.471 11:59:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:50.471 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:50.471 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:50.471 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:50.471 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:50.471 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:50.471 11:59:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:10:50.471 11:59:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:50.471 true 00:10:50.730 11:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2102777 00:10:50.730 11:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.666 11:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:51.666 11:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031
00:10:51.666 11:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031
00:10:51.666 true
00:10:51.924 11:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2102777
00:10:51.924 11:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:51.924 11:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:52.182 11:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032
00:10:52.182 11:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032
00:10:52.441 true
00:10:52.441 11:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2102777
00:10:52.441 11:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:53.377 Initializing NVMe Controllers
00:10:53.377 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:53.377 Controller IO queue size 128, less than required.
00:10:53.377 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:53.377 Controller IO queue size 128, less than required.
00:10:53.377 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:53.377 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:53.377 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:10:53.377 Initialization complete. Launching workers.
00:10:53.378 ========================================================
00:10:53.378                                                                                                  Latency(us)
00:10:53.378 Device Information                                                                             : IOPS       MiB/s    Average       min          max
00:10:53.378 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  1722.06       0.84   45327.79    1676.04  1037607.50
00:10:53.378 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 16294.67       7.96    7855.17    1828.95   366229.91
00:10:53.378 ========================================================
00:10:53.378 Total                                                                                          : 18016.73      8.80   11436.85    1676.04  1037607.50
00:10:53.378
00:10:53.635 11:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:53.635 11:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033
00:10:53.635 11:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033
00:10:53.893 true
00:10:53.893 11:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2102777
00:10:53.893 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2102777) - No such process
00:10:53.893 11:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2102777
00:10:53.893 11:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.152 11:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:54.152 11:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:10:54.152 11:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:10:54.152 11:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:10:54.152 11:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:54.152 11:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:54.411 null0 00:10:54.411 11:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:54.411 11:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:54.411 11:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:54.669 null1 00:10:54.670 11:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:54.670 11:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:54.670 11:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:54.670 null2 00:10:54.670 11:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:54.670 11:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:54.670 11:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:54.929 null3 00:10:54.929 11:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:54.929 11:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:54.929 11:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:55.263 null4 00:10:55.264 11:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:55.264 11:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:55.264 11:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:55.264 null5 00:10:55.264 11:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:55.264 11:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:55.264 11:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:55.531 null6 00:10:55.531 11:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:55.531 11:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:55.531 11:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:55.531 null7 00:10:55.531 11:59:45 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:55.531 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:55.790 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:55.790 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:55.790 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:55.790 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:55.790 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:55.790 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:55.790 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.790 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:55.790 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:55.790 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:55.790 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:55.790 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:55.790 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:55.790 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:55.790 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.790 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
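The surrounding entries record the parallel phase of the test (the `@58`-`@66` markers): eight null bdevs (null0..null7) are created, one background `add_remove` worker per bdev is launched with its PID appended to `pids`, and the script then waits on all of them (the `wait 2108483 2108485 ...` entry). A sketch of that fan-out/fan-in structure, with `add_remove` stubbed; the real function attaches and detaches namespace `$1` (backed by bdev `$2`) ten times via scripts/rpc.py nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns:

```shell
# Sketch of the parallel add/remove phase logged around this point.
# add_remove is a stub; the real one drives scripts/rpc.py against cnode1.
add_remove() { echo "worker nsid=$1 bdev=$2"; }

nthreads=8
pids=()
for (( i = 0; i < nthreads; i++ )); do
    add_remove $((i + 1)) "null$i" &   # one background worker per null bdev
    pids+=($!)                         # collect PID for the final wait
done
wait "${pids[@]}"                      # block until every worker finishes
echo "launched ${#pids[@]} workers"
```

Running all eight workers concurrently against the same subsystem is what exercises the namespace hotplug paths under contention.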
00:10:55.790 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:55.790 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:55.790 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:55.790 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:55.790 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2108483 2108485 2108486 2108489 2108490 2108493 2108494 2108496 00:10:55.790 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:55.790 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:55.790 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.790 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:55.790 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:55.790 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.790 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:55.790 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:55.790 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:55.790 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:55.790 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:55.790 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:56.050 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.050 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.050 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:56.050 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.050 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.050 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:56.050 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.050 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.050 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:56.050 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.050 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.050 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:56.050 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.050 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.050 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:56.050 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.050 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.050 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:56.050 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.050 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.050 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:56.050 11:59:45 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.050 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.050 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:56.309 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.309 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:56.309 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:56.309 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:56.309 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:56.309 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:56.309 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:56.309 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:56.309 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.309 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.309 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:56.309 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.309 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.309 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:56.309 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.309 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.309 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:56.309 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.309 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.309 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:56.309 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.309 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.309 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:56.310 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.310 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.310 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:56.310 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.310 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.310 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:56.310 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.310 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.310 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:56.569 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.569 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:56.569 11:59:45 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:56.569 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:56.569 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:56.569 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:56.569 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:56.569 11:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:56.827 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.827 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.827 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:56.827 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.827 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.827 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:56.827 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.827 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.827 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.827 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:56.827 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.827 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:56.827 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.827 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.828 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:56.828 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.828 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.828 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:56.828 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.828 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.828 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.828 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.828 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:56.828 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:56.828 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.828 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:57.086 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:57.086 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:57.086 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:57.086 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:57.086 11:59:46 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:57.086 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:57.086 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.086 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.086 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:57.086 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.086 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.086 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:57.086 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.086 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.086 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:57.086 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.086 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.086 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:57.087 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.087 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.087 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:57.087 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.087 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.087 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:57.087 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.087 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.087 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.087 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.087 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:57.087 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:57.344 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.344 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:57.345 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:57.345 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:57.345 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:57.345 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:57.345 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:57.345 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:57.602 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.602 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.602 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:57.602 11:59:46 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.602 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.602 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:57.602 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.602 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.602 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:57.602 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.602 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.602 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:57.602 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.602 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.602 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:57.602 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.602 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.603 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:57.603 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.603 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.603 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:57.603 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.603 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.603 11:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:57.603 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.603 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:57.603 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:57.603 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:57.603 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:57.603 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:57.603 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:57.603 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:57.861 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.861 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.861 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:57.861 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.861 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.861 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:57.861 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.861 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.861 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:57.861 11:59:47 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.861 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.861 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.861 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:57.861 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.861 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:57.861 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.861 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.861 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:57.861 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.861 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.861 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:57.861 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.861 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.861 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:58.119 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:58.119 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:58.119 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:58.119 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.119 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:58.119 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:58.119 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:58.119 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:58.119 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.119 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.119 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:58.377 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.377 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.377 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:58.377 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.377 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.377 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:58.377 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.377 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.377 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:58.377 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.377 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.377 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:58.377 11:59:47 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.377 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.377 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:58.378 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.378 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.378 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:58.378 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.378 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.378 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:58.378 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:58.378 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:58.378 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:58.378 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:58.378 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:58.378 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:58.378 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:58.378 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.636 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.636 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.636 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:58.636 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.636 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.636 11:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:58.636 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.636 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.636 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:58.636 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.636 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.636 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:58.636 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.636 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.636 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:58.636 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.636 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.636 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.636 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:58.636 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.636 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:58.636 11:59:48 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.636 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.636 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:58.895 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:58.895 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:58.895 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:58.895 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:58.895 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:58.895 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:58.895 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:58.895 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.895 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.895 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.895 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:58.895 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.896 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.896 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:58.896 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.896 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.896 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:58.896 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.896 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.896 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:58.896 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.896 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.896 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:58.896 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.896 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.896 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:58.896 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.896 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.896 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:59.154 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.154 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.154 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:59.154 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:59.154 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:59.154 11:59:48 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.154 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:59.154 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:59.154 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:59.154 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:59.155 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:59.413 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.413 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.413 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.413 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.413 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.413 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.413 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.413 
11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.413 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.413 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.413 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.413 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.413 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.413 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.413 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.413 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.413 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:59.413 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:59.413 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:59.413 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:59.413 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:59.413 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:59.413 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:59.413 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:59.413 rmmod nvme_tcp 00:10:59.413 rmmod nvme_fabrics 00:10:59.413 rmmod nvme_keyring 00:10:59.413 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:59.413 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@124 -- # set -e 00:10:59.413 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:59.413 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2102227 ']' 00:10:59.413 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2102227 00:10:59.413 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@949 -- # '[' -z 2102227 ']' 00:10:59.413 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # kill -0 2102227 00:10:59.413 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # uname 00:10:59.413 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:59.413 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2102227 00:10:59.672 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:10:59.672 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:10:59.672 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2102227' 00:10:59.672 killing process with pid 2102227 00:10:59.672 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # kill 2102227 00:10:59.672 11:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # wait 2102227 00:10:59.672 11:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:59.672 11:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:59.672 11:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:59.672 11:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:59.672 11:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 
-- # remove_spdk_ns 00:10:59.672 11:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.672 11:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:59.672 11:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.209 11:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:02.209 00:11:02.209 real 0m48.776s 00:11:02.209 user 3m6.959s 00:11:02.209 sys 0m20.688s 00:11:02.209 11:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:02.209 11:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.209 ************************************ 00:11:02.209 END TEST nvmf_ns_hotplug_stress 00:11:02.209 ************************************ 00:11:02.209 11:59:51 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:02.209 11:59:51 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:02.209 11:59:51 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:02.209 11:59:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:02.209 ************************************ 00:11:02.209 START TEST nvmf_connect_stress 00:11:02.209 ************************************ 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:02.209 * Looking for test storage... 
00:11:02.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.209 11:59:51 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:02.209 11:59:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.774 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:08.775 11:59:57 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:08.775 
11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:08.775 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:08.775 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:08.775 
11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:08.775 Found net devices under 0000:af:00.0: cvl_0_0 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.775 11:59:57 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:08.775 Found net devices under 0000:af:00.1: cvl_0_1 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:08.775 11:59:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:08.775 11:59:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:08.775 11:59:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:08.775 11:59:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:08.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:08.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:11:08.775 00:11:08.775 --- 10.0.0.2 ping statistics --- 00:11:08.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.775 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:11:08.775 11:59:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:08.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:08.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:11:08.775 00:11:08.775 --- 10.0.0.1 ping statistics --- 00:11:08.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.775 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:11:08.776 11:59:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:08.776 11:59:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:11:08.776 11:59:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:08.776 11:59:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:08.776 11:59:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:08.776 11:59:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:08.776 11:59:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:08.776 11:59:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:08.776 11:59:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:08.776 11:59:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:08.776 11:59:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:08.776 11:59:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:08.776 11:59:58 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.776 11:59:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2113127 00:11:08.776 11:59:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2113127 00:11:08.776 11:59:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:08.776 11:59:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@830 -- # '[' -z 2113127 ']' 00:11:08.776 11:59:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.776 11:59:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:08.776 11:59:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.776 11:59:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:08.776 11:59:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.776 [2024-06-10 11:59:58.199300] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:11:08.776 [2024-06-10 11:59:58.199348] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.776 EAL: No free 2048 kB hugepages reported on node 1 00:11:08.776 [2024-06-10 11:59:58.274066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:09.033 [2024-06-10 11:59:58.347705] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:09.033 [2024-06-10 11:59:58.347742] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:09.033 [2024-06-10 11:59:58.347751] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:09.033 [2024-06-10 11:59:58.347759] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:09.033 [2024-06-10 11:59:58.347766] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:09.033 [2024-06-10 11:59:58.347870] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:11:09.033 [2024-06-10 11:59:58.347973] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:11:09.033 [2024-06-10 11:59:58.347974] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.597 11:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:09.597 11:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@863 -- # return 0 00:11:09.597 11:59:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:09.597 11:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:09.597 11:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.597 11:59:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.597 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:09.597 11:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:09.597 11:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.597 [2024-06-10 11:59:59.055932] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:09.597 11:59:59 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:09.597 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:09.597 11:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:09.597 11:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.597 11:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:09.597 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:09.597 11:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:09.597 11:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.597 [2024-06-10 11:59:59.083570] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:09.597 11:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:09.597 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:09.597 11:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:09.597 11:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.597 NULL1 00:11:09.597 11:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:09.597 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2113165 00:11:09.597 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:09.597 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:09.597 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:09.597 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:09.597 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.597 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.854 EAL: No free 2048 kB hugepages reported on node 1 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- 
# for i in $(seq 1 20) 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i 
in $(seq 1 20) 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2113165 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:09.854 11:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.111 11:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:10.111 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2113165 00:11:10.111 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.111 11:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:10.111 11:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.368 11:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:10.368 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2113165 00:11:10.368 11:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.368 11:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:10.368 11:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.933 12:00:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:11:10.933 12:00:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2113165 00:11:10.933 12:00:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.933 12:00:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:10.933 12:00:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.190 12:00:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:11.190 12:00:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2113165 00:11:11.190 12:00:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.190 12:00:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:11.190 12:00:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.447 12:00:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:11.447 12:00:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2113165 00:11:11.447 12:00:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.447 12:00:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:11.447 12:00:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.703 12:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:11.703 12:00:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2113165 00:11:11.703 12:00:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.703 12:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:11.703 12:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.968 12:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
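Earlier in this log (nvmf/common.sh@383-400), the harness maps each NIC's PCI address to its kernel net device by globbing the device's `net/` directory in sysfs and stripping the path prefix with a `##*/` parameter expansion. Below is a minimal, hedged sketch of that pattern; it uses a mock sysfs tree so it runs without real hardware, and the `demo_root` path is an illustrative assumption (the real code globs `/sys/bus/pci/devices/`).

```shell
#!/usr/bin/env bash
# Sketch of the sysfs PCI-to-netdev mapping seen at nvmf/common.sh@383-400.
# A mock directory tree stands in for /sys/bus/pci/devices so the sketch
# runs anywhere; demo_root and the device names are illustrative only.
set -euo pipefail

demo_root=$(mktemp -d)
mkdir -p "$demo_root/0000:af:00.0/net/cvl_0_0"

pci=0000:af:00.0
# Glob the net/ subdirectory of the PCI device, as the harness does.
pci_net_devs=("$demo_root/$pci/net/"*)
# "${arr[@]##*/}" strips everything up to the last slash in every element,
# leaving just the interface name -- the same expansion used in the log.
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "Found net devices under $pci: ${pci_net_devs[*]}"

rm -rf "$demo_root"
```

The same two-step glob-then-strip idiom is what produces the "Found net devices under 0000:af:00.0: cvl_0_0" lines above.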
00:11:11.969 12:00:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2113165 00:11:11.969 12:00:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.969 12:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:11.969 12:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.544 12:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:12.544 12:00:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2113165 00:11:12.544 12:00:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.544 12:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:12.544 12:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.811 12:00:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:12.811 12:00:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2113165 00:11:12.811 12:00:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.811 12:00:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:12.811 12:00:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.068 12:00:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:13.068 12:00:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2113165 00:11:13.068 12:00:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.068 12:00:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:13.068 12:00:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.325 12:00:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:11:13.326 12:00:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2113165 00:11:13.326 12:00:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.326 12:00:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:13.326 12:00:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.583 12:00:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:13.583 12:00:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2113165 00:11:13.583 12:00:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.583 12:00:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:13.840 12:00:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.097 12:00:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:14.097 12:00:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2113165 00:11:14.097 12:00:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.097 12:00:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:14.097 12:00:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.354 12:00:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:14.354 12:00:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2113165 00:11:14.354 12:00:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.354 12:00:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:14.354 12:00:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.611 12:00:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:11:14.611 12:00:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2113165 00:11:14.611 12:00:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.611 12:00:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:14.611 12:00:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.175 12:00:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:15.175 12:00:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2113165 00:11:15.175 12:00:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.175 12:00:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:15.175 12:00:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.432 12:00:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:15.432 12:00:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2113165 00:11:15.432 12:00:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.432 12:00:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:15.432 12:00:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.689 12:00:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:15.689 12:00:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2113165 00:11:15.689 12:00:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.689 12:00:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:15.689 12:00:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.946 12:00:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:11:15.946 12:00:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2113165 00:11:15.946 12:00:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.946 12:00:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:15.946 12:00:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:16.202 12:00:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:16.202 12:00:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2113165 00:11:16.202 12:00:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.202 12:00:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:16.202 12:00:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:16.767 12:00:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:16.767 12:00:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2113165 00:11:16.767 12:00:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.767 12:00:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:16.767 12:00:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.024 12:00:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:17.024 12:00:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2113165 00:11:17.024 12:00:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.024 12:00:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:17.024 12:00:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.281 12:00:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:11:17.281 12:00:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2113165 00:11:17.281 12:00:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.281 12:00:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:17.281 12:00:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.539 12:00:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:17.539 12:00:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2113165 00:11:17.539 12:00:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.539 12:00:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:17.539 12:00:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:18.104 12:00:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:18.105 12:00:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2113165 00:11:18.105 12:00:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:18.105 12:00:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:18.105 12:00:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:18.363 12:00:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:18.363 12:00:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2113165 00:11:18.363 12:00:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:18.363 12:00:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:18.363 12:00:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:18.622 12:00:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:11:18.622 12:00:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2113165 00:11:18.622 12:00:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:18.622 12:00:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:18.622 12:00:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:18.880 12:00:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:18.880 12:00:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2113165 00:11:18.880 12:00:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:18.880 12:00:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:18.880 12:00:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.138 12:00:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:19.138 12:00:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2113165 00:11:19.138 12:00:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:19.138 12:00:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:19.138 12:00:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.455 12:00:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:19.455 12:00:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2113165 00:11:19.455 12:00:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:19.455 12:00:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:19.455 12:00:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.723 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
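The long run of repeated `kill -0 2113165` entries above is the harness polling whether the connect_stress process is still alive: `kill -0` delivers no signal, it only tests that the PID exists and is signalable, and the loop ends once the check fails ("No such process" in the next entries). A minimal runnable sketch of that poll, with a short `sleep` standing in for the real stress workload:

```shell
#!/usr/bin/env bash
# Sketch of the liveness poll behind the repeated "kill -0 <pid>" entries.
# kill -0 sends no signal; it only checks that the process still exists.
# The sleep below is an illustrative stand-in for the stress workload.
sleep 2 &
stress_pid=$!

polls=0
while kill -0 "$stress_pid" 2>/dev/null; do
    polls=$((polls + 1))
    sleep 1
done
# Reap the child; ignore its exit status for this sketch.
wait "$stress_pid" 2>/dev/null || true
echo "process $stress_pid exited after $polls polls"
```

In the real script each successful `kill -0` is followed by an `rpc_cmd` batch against the target, which is why the log interleaves the two.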
00:11:19.982 12:00:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:19.982 12:00:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2113165 00:11:19.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2113165) - No such process 00:11:19.982 12:00:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2113165 00:11:19.982 12:00:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:19.982 12:00:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:19.982 12:00:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:19.982 12:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:19.982 12:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:19.982 12:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:19.982 12:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:19.982 12:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:19.982 12:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:19.982 rmmod nvme_tcp 00:11:19.982 rmmod nvme_fabrics 00:11:19.982 rmmod nvme_keyring 00:11:19.982 12:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:19.982 12:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:19.982 12:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:19.982 12:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2113127 ']' 00:11:19.982 12:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2113127 00:11:19.982 12:00:09 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@949 -- # '[' -z 2113127 ']' 00:11:19.982 12:00:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # kill -0 2113127 00:11:19.982 12:00:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # uname 00:11:19.982 12:00:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:19.982 12:00:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2113127 00:11:19.982 12:00:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:11:19.982 12:00:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:11:19.982 12:00:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2113127' 00:11:19.982 killing process with pid 2113127 00:11:19.982 12:00:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@968 -- # kill 2113127 00:11:19.982 12:00:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@973 -- # wait 2113127 00:11:20.241 12:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:20.241 12:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:20.241 12:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:20.241 12:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:20.241 12:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:20.241 12:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.241 12:00:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:20.241 12:00:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.778 12:00:11 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:22.778 00:11:22.778 real 0m20.383s 00:11:22.778 user 0m41.020s 00:11:22.778 sys 0m9.771s 00:11:22.778 12:00:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:22.778 12:00:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:22.778 ************************************ 00:11:22.778 END TEST nvmf_connect_stress 00:11:22.778 ************************************ 00:11:22.778 12:00:11 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:22.778 12:00:11 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:22.778 12:00:11 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:22.778 12:00:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:22.778 ************************************ 00:11:22.778 START TEST nvmf_fused_ordering 00:11:22.778 ************************************ 00:11:22.778 12:00:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:22.778 * Looking for test storage... 
00:11:22.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.778 12:00:11 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:22.778 12:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:22.778 12:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:22.778 12:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.778 12:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:22.778 12:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.778 12:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:22.778 12:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.778 12:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.778 12:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.778 12:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.778 12:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.778 12:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:22.778 12:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:22.778 12:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:22.778 12:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.778 12:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:22.778 12:00:11 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.778 12:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:22.778 12:00:11 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.778 12:00:11 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.778 12:00:11 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.778 12:00:11 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.779 12:00:11 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.779 12:00:11 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.779 12:00:11 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:22.779 12:00:11 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.779 12:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:22.779 12:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:22.779 12:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:22.779 12:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.779 12:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:22.779 12:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.779 12:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:22.779 12:00:11 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:22.779 12:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:22.779 12:00:11 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:22.779 12:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:22.779 12:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:22.779 12:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:22.779 12:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:22.779 12:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:22.779 12:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.779 12:00:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:22.779 12:00:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.779 12:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:22.779 12:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:22.779 12:00:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:22.779 12:00:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:29.342 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:29.342 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:29.342 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:29.342 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:29.342 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:29.342 12:00:18 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:29.342 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:29.342 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:29.342 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:29.342 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:29.342 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:29.342 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:29.342 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:29.342 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:29.342 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:29.342 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:29.343 
12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:29.343 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:29.343 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:29.343 
12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:29.343 Found net devices under 0000:af:00.0: cvl_0_0 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.343 12:00:18 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:29.343 Found net devices under 0000:af:00.1: cvl_0_1 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:29.343 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:29.343 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:11:29.343 00:11:29.343 --- 10.0.0.2 ping statistics --- 00:11:29.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.343 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:29.343 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:29.343 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:11:29.343 00:11:29.343 --- 10.0.0.1 ping statistics --- 00:11:29.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.343 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:29.343 12:00:18 
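The namespace plumbing that led to the two successful pings (`nvmf_tcp_init` in `nvmf/common.sh`) boils down to: move one port of the NIC pair into a private namespace, give each side an address on 10.0.0.0/24, and prefix every target-side command with `ip netns exec`. A sketch using the names from this log; the privileged `ip`/`iptables` steps are shown only as comments, since they need root and real `cvl_0_*` devices:

```shell
# Namespace-wrapped command prefix, as nvmf/common.sh assembles it (sketch):
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")

# Privileged steps the harness performs (not executed here):
#   ip netns add "$NVMF_TARGET_NAMESPACE"
#   ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"
#   ip addr add "$NVMF_INITIATOR_IP/24" dev cvl_0_1
#   "${NVMF_TARGET_NS_CMD[@]}" ip addr add "$NVMF_FIRST_TARGET_IP/24" dev cvl_0_0
#   iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Every target-side command is then run through the prefix, e.g. the reverse ping:
cmd=("${NVMF_TARGET_NS_CMD[@]}" ping -c 1 "$NVMF_INITIATOR_IP")
echo "${cmd[*]}"
```

The same prefix is later prepended to the `nvmf_tgt` invocation itself (`NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")`), which is why the target listens inside the namespace while the initiator connects from the host side.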
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2119281 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2119281 00:11:29.343 12:00:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:29.344 12:00:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@830 -- # '[' -z 2119281 ']' 00:11:29.344 12:00:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.344 12:00:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:29.344 12:00:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.344 12:00:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:29.344 12:00:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:29.344 [2024-06-10 12:00:18.778267] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:11:29.344 [2024-06-10 12:00:18.778317] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:29.344 EAL: No free 2048 kB hugepages reported on node 1 00:11:29.344 [2024-06-10 12:00:18.853517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.602 [2024-06-10 12:00:18.926744] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:29.602 [2024-06-10 12:00:18.926780] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:29.602 [2024-06-10 12:00:18.926789] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:29.602 [2024-06-10 12:00:18.926798] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:29.602 [2024-06-10 12:00:18.926820] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:29.602 [2024-06-10 12:00:18.926844] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.166 12:00:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:30.166 12:00:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@863 -- # return 0 00:11:30.166 12:00:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:30.166 12:00:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:30.166 12:00:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:30.166 12:00:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:30.166 12:00:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:30.166 12:00:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:30.166 12:00:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:30.166 [2024-06-10 12:00:19.625144] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:30.166 12:00:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:30.166 12:00:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:30.166 12:00:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:30.166 12:00:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:30.166 12:00:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:30.166 12:00:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:30.166 12:00:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:30.166 12:00:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:30.166 [2024-06-10 12:00:19.645311] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:30.166 12:00:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:30.166 12:00:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:30.166 12:00:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:30.167 12:00:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:30.167 NULL1 00:11:30.167 12:00:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:30.167 12:00:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:30.167 12:00:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:30.167 12:00:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:30.167 12:00:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:30.167 12:00:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
00:11:30.167 12:00:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:30.167 12:00:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:30.167 12:00:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:30.167 12:00:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:30.424 [2024-06-10 12:00:19.694521] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:11:30.424 [2024-06-10 12:00:19.694555] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2119560 ] 00:11:30.424 EAL: No free 2048 kB hugepages reported on node 1 00:11:30.682 Attached to nqn.2016-06.io.spdk:cnode1 00:11:30.682 Namespace ID: 1 size: 1GB 00:11:30.682 fused_ordering(0) 00:11:30.682 fused_ordering(1) 00:11:30.682 fused_ordering(2) 00:11:30.682 fused_ordering(3) 00:11:30.682 fused_ordering(4) 00:11:30.682 fused_ordering(5) 00:11:30.682 fused_ordering(6) 00:11:30.682 fused_ordering(7) 00:11:30.682 fused_ordering(8) 00:11:30.682 fused_ordering(9) 00:11:30.682 fused_ordering(10) 00:11:30.682 fused_ordering(11) 00:11:30.682 fused_ordering(12) 00:11:30.682 fused_ordering(13) 00:11:30.682 fused_ordering(14) 00:11:30.682 fused_ordering(15) 00:11:30.682 fused_ordering(16) 00:11:30.682 fused_ordering(17) 00:11:30.682 fused_ordering(18) 00:11:30.682 fused_ordering(19) 00:11:30.682 fused_ordering(20) 00:11:30.682 fused_ordering(21) 00:11:30.682 fused_ordering(22) 00:11:30.682 fused_ordering(23) 00:11:30.682 fused_ordering(24) 00:11:30.682 fused_ordering(25) 00:11:30.682 fused_ordering(26) 00:11:30.682 
[fused_ordering(27) … fused_ordering(1023): 997 repetitive per-iteration progress lines elided; iterations completed between 00:11:30.682 and 00:11:32.329]
00:11:32.329 12:00:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:11:32.329 12:00:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:11:32.329 12:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:32.329 12:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync
00:11:32.329 12:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:11:32.329 12:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e
00:11:32.329 12:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:32.329 12:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:11:32.329 rmmod nvme_tcp
00:11:32.329 rmmod nvme_fabrics
00:11:32.329 rmmod nvme_keyring
00:11:32.329 12:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:32.329 12:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e
00:11:32.329 12:00:21
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0
00:11:32.329 12:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2119281 ']'
00:11:32.329 12:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2119281
00:11:32.329 12:00:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@949 -- # '[' -z 2119281 ']'
00:11:32.329 12:00:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # kill -0 2119281
00:11:32.329 12:00:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # uname
00:11:32.329 12:00:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:11:32.329 12:00:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2119281
00:11:32.588 12:00:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:11:32.588 12:00:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:11:32.588 12:00:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2119281'
00:11:32.588 killing process with pid 2119281
00:11:32.588 12:00:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # kill 2119281
00:11:32.588 12:00:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # wait 2119281
00:11:32.588 12:00:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:11:32.588 12:00:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:11:32.588 12:00:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:11:32.588 12:00:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:11:32.588 12:00:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns
00:11:32.588 12:00:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:32.588 12:00:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:11:32.588 12:00:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:35.115 12:00:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:11:35.115
00:11:35.115 real 0m12.364s
00:11:35.115 user 0m6.002s
00:11:35.115 sys 0m6.963s
00:11:35.115 12:00:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # xtrace_disable
00:11:35.115 12:00:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:35.115 ************************************
00:11:35.115 END TEST nvmf_fused_ordering
00:11:35.115 ************************************
00:11:35.115 12:00:24 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:11:35.115 12:00:24 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:11:35.115 12:00:24 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:11:35.115 12:00:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:11:35.115 ************************************
00:11:35.115 START TEST nvmf_delete_subsystem
00:11:35.115 ************************************
00:11:35.115 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:11:35.115 * Looking for test storage...
00:11:35.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.115 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:35.115 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:35.115 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.115 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.115 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.115 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.115 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.115 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.115 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.115 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:35.116 12:00:24 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:35.116 12:00:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:41.671 12:00:30 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:41.671 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:41.671 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:41.671 Found net devices under 0000:af:00.0: cvl_0_0 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:41.671 Found net devices under 0000:af:00.1: cvl_0_1 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:41.671 
12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:41.671 12:00:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:41.671 12:00:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:41.671 12:00:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:41.671 12:00:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:41.671 12:00:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:41.929 12:00:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:41.929 12:00:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:41.929 12:00:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:41.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:41.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:11:41.929 00:11:41.929 --- 10.0.0.2 ping statistics --- 00:11:41.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.929 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:11:41.929 12:00:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:41.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:41.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:11:41.929 00:11:41.929 --- 10.0.0.1 ping statistics --- 00:11:41.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.929 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:11:41.929 12:00:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:41.929 12:00:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:11:41.929 12:00:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:41.929 12:00:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:41.929 12:00:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:41.929 12:00:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:41.929 12:00:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:41.929 12:00:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:41.929 12:00:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:41.929 12:00:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:41.929 12:00:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:41.929 12:00:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:41.929 
12:00:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:41.929 12:00:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2123522 00:11:41.929 12:00:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:41.929 12:00:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2123522 00:11:41.929 12:00:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@830 -- # '[' -z 2123522 ']' 00:11:41.929 12:00:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.929 12:00:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:41.929 12:00:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.929 12:00:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:41.929 12:00:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:41.929 [2024-06-10 12:00:31.358282] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:11:41.929 [2024-06-10 12:00:31.358327] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.929 EAL: No free 2048 kB hugepages reported on node 1 00:11:41.929 [2024-06-10 12:00:31.431942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:42.185 [2024-06-10 12:00:31.500827] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:42.185 [2024-06-10 12:00:31.500870] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:42.185 [2024-06-10 12:00:31.500879] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:42.185 [2024-06-10 12:00:31.500887] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:42.185 [2024-06-10 12:00:31.500910] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:42.185 [2024-06-10 12:00:31.500962] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.185 [2024-06-10 12:00:31.500966] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.746 12:00:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:42.746 12:00:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@863 -- # return 0 00:11:42.746 12:00:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:42.746 12:00:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:42.746 12:00:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:42.746 12:00:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.746 12:00:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:42.746 12:00:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:42.746 12:00:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:42.746 [2024-06-10 12:00:32.208758] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:42.746 12:00:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:11:42.746 12:00:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:42.746 12:00:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:42.746 12:00:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:42.746 12:00:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:42.746 12:00:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:42.746 12:00:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:42.746 12:00:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:42.746 [2024-06-10 12:00:32.224960] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:42.746 12:00:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:42.746 12:00:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:42.746 12:00:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:42.746 12:00:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:42.746 NULL1 00:11:42.746 12:00:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:42.746 12:00:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:42.746 12:00:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:42.746 12:00:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:42.746 Delay0 00:11:42.746 12:00:32 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:42.746 12:00:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:42.746 12:00:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:42.746 12:00:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:42.746 12:00:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:42.746 12:00:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2123803 00:11:42.746 12:00:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:42.746 12:00:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:43.002 EAL: No free 2048 kB hugepages reported on node 1 00:11:43.002 [2024-06-10 12:00:32.309565] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:11:44.890 12:00:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:44.890 12:00:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:44.890 12:00:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:45.147 Read completed with error (sct=0, sc=8) 00:11:45.147 Read completed with error (sct=0, sc=8) 00:11:45.147 Read completed with error (sct=0, sc=8) 00:11:45.147 Read completed with error (sct=0, sc=8) 00:11:45.147 starting I/O failed: -6 00:11:45.147 Read completed with error (sct=0, sc=8) 00:11:45.147 Read completed with error (sct=0, sc=8) 00:11:45.147 Read completed with error (sct=0, sc=8) 00:11:45.147 Read completed with error (sct=0, sc=8) 00:11:45.147 starting I/O failed: -6 00:11:45.147 Read completed with error (sct=0, sc=8) 00:11:45.147 Read completed with error (sct=0, sc=8) 00:11:45.147 Write completed with error (sct=0, sc=8) 00:11:45.147 Read completed with error (sct=0, sc=8) 00:11:45.147 starting I/O failed: -6 00:11:45.147 Write completed with error (sct=0, sc=8) 00:11:45.147 Read completed with error (sct=0, sc=8) 00:11:45.147 Read completed with error (sct=0, sc=8) 00:11:45.147 Read completed with error (sct=0, sc=8) 00:11:45.147 starting I/O failed: -6 00:11:45.147 Write completed with error (sct=0, sc=8) 00:11:45.147 Write completed with error (sct=0, sc=8) 00:11:45.147 Read completed with error (sct=0, sc=8) 00:11:45.147 Write completed with error (sct=0, sc=8) 00:11:45.147 starting I/O failed: -6 00:11:45.147 Write completed with error (sct=0, sc=8) 00:11:45.147 Read completed with error (sct=0, sc=8) 00:11:45.147 Read completed with error (sct=0, sc=8) 00:11:45.147 Read completed with error (sct=0, sc=8) 00:11:45.147 starting I/O failed: -6 00:11:45.147 Write completed with error (sct=0, sc=8) 00:11:45.147 Read completed with error (sct=0, sc=8) 00:11:45.147 Write completed with error 
(sct=0, sc=8) 00:11:45.147 Read completed with error (sct=0, sc=8) 00:11:45.147 starting I/O failed: -6 00:11:45.147 Write completed with error (sct=0, sc=8) 00:11:45.147 [2024-06-10 12:00:34.430092] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f49dc00c600 is same with the state(5) to be set 00:11:45.147 [2024-06-10 12:00:34.430721] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52e070 is same with the state(5) to be set 00:11:45.147 [2024-06-10 12:00:34.430939] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f49dc000c00 is same with the state(5) to be set 00:11:46.123 [2024-06-10 12:00:35.404946] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x54f1a0 is same with the state(5) to be set 00:11:46.123 [2024-06-10 12:00:35.432398] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f49dc00c2f0 is same with the state(5) to be set 00:11:46.123 [2024-06-10 12:00:35.432771] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x54fc30 is same with the state(5) to be set 00:11:46.123 [2024-06-10 12:00:35.432943] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52e250 is same with the state(5) to be set 00:11:46.123 [2024-06-10 12:00:35.433114] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x550d20 is same with the state(5) to be set 00:11:46.123 Initializing NVMe Controllers 00:11:46.123 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:46.123 Controller IO queue size 128, less than required. 00:11:46.123 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:46.123 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:46.123 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:46.123 Initialization complete. Launching workers.
00:11:46.123 ======================================================== 00:11:46.123 Latency(us) 00:11:46.123 Device Information : IOPS MiB/s Average min max 00:11:46.123 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 194.12 0.09 945813.97 866.91 1011873.27 00:11:46.123 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 153.41 0.07 882834.54 438.87 1012057.87 00:11:46.123 ======================================================== 00:11:46.123 Total : 347.52 0.17 918013.05 438.87 1012057.87 00:11:46.123 00:11:46.123 [2024-06-10 12:00:35.433826] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x54f1a0 (9): Bad file descriptor 00:11:46.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:46.123 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:46.123 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:11:46.123 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2123803 00:11:46.123 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:46.686 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:46.686 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2123803 00:11:46.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2123803) - No such process 00:11:46.686 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2123803 00:11:46.686 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@649 -- # local es=0 00:11:46.686 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # valid_exec_arg wait 2123803 00:11:46.686 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@637 -- # local arg=wait 00:11:46.686 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:46.686 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # type -t wait 00:11:46.686 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:46.686 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # wait 2123803 00:11:46.686 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # es=1 00:11:46.686 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:46.686 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:46.686 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:46.686 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:46.686 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:46.686 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:46.686 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:46.686 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.686 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:46.686 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:46.686 [2024-06-10 12:00:35.961579] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.686 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 
-- # [[ 0 == 0 ]] 00:11:46.686 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:46.686 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:46.686 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:46.686 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:46.686 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2124348 00:11:46.686 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:46.686 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:46.686 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2124348 00:11:46.686 12:00:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:46.686 EAL: No free 2048 kB hugepages reported on node 1 00:11:46.686 [2024-06-10 12:00:36.031323] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:11:47.248 12:00:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:47.248 12:00:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2124348 00:11:47.248 12:00:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:47.504 12:00:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:47.504 12:00:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2124348 00:11:47.504 12:00:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:48.071 12:00:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:48.071 12:00:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2124348 00:11:48.071 12:00:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:48.637 12:00:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:48.637 12:00:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2124348 00:11:48.637 12:00:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:49.203 12:00:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:49.203 12:00:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2124348 00:11:49.203 12:00:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:49.773 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:49.773 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2124348 00:11:49.773 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:49.773 Initializing NVMe Controllers 00:11:49.773 Attached to NVMe over Fabrics 
controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:49.773 Controller IO queue size 128, less than required. 00:11:49.773 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:49.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:49.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:49.773 Initialization complete. Launching workers. 00:11:49.773 ======================================================== 00:11:49.773 Latency(us) 00:11:49.773 Device Information : IOPS MiB/s Average min max 00:11:49.773 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002795.85 1000158.09 1009538.67 00:11:49.773 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004350.01 1000197.99 1010833.95 00:11:49.773 ======================================================== 00:11:49.773 Total : 256.00 0.12 1003572.93 1000158.09 1010833.95 00:11:49.773 00:11:50.074 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:50.074 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2124348 00:11:50.074 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2124348) - No such process 00:11:50.074 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2124348 00:11:50.074 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:50.074 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:50.074 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:50.074 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:11:50.074 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:50.074 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:11:50.074 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:50.074 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:50.074 rmmod nvme_tcp 00:11:50.074 rmmod nvme_fabrics 00:11:50.074 rmmod nvme_keyring 00:11:50.074 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:50.074 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:11:50.074 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:11:50.074 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2123522 ']' 00:11:50.074 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2123522 00:11:50.074 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@949 -- # '[' -z 2123522 ']' 00:11:50.074 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # kill -0 2123522 00:11:50.074 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # uname 00:11:50.074 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:50.074 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2123522 00:11:50.359 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:50.359 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:50.359 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2123522' 00:11:50.359 killing process with pid 2123522 00:11:50.359 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # kill 2123522 00:11:50.359 12:00:39 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # wait 2123522 00:11:50.359 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:50.359 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:50.359 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:50.359 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:50.359 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:50.359 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.359 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:50.359 12:00:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.893 12:00:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:52.893 00:11:52.893 real 0m17.680s 00:11:52.893 user 0m29.606s 00:11:52.893 sys 0m7.172s 00:11:52.893 12:00:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:52.893 12:00:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:52.893 ************************************ 00:11:52.893 END TEST nvmf_delete_subsystem 00:11:52.893 ************************************ 00:11:52.893 12:00:41 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:52.893 12:00:41 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:52.893 12:00:41 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:52.893 12:00:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:52.893 ************************************ 00:11:52.893 START TEST nvmf_ns_masking 00:11:52.893 
************************************ 00:11:52.893 12:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:52.893 * Looking for test storage... 00:11:52.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=ec7f723f-b13c-4d93-9129-1250f916c2e8 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:52.893 12:00:42 nvmf_tcp.nvmf_ns_masking 
-- common/autotest_common.sh@10 -- # set +x 00:11:59.458 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:59.458 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:59.458 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:59.459 
12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:59.459 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ 
tcp == rdma ]] 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:59.459 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:59.459 Found net devices under 0000:af:00.0: cvl_0_0 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:59.459 Found net devices under 0000:af:00.1: cvl_0_1 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:59.459 12:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:59.717 12:00:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:59.717 12:00:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:59.717 12:00:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:59.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:59.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:11:59.717 00:11:59.717 --- 10.0.0.2 ping statistics --- 00:11:59.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.717 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:11:59.717 12:00:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:59.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:59.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:11:59.717 00:11:59.717 --- 10.0.0.1 ping statistics --- 00:11:59.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.717 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:11:59.718 12:00:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:59.718 12:00:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:59.718 12:00:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:59.718 12:00:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:59.718 12:00:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:59.718 12:00:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:59.718 12:00:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:59.718 12:00:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:59.718 12:00:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:59.718 12:00:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:11:59.718 12:00:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:59.718 12:00:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:59.718 12:00:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 
00:11:59.718 12:00:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2128662 00:11:59.718 12:00:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:59.718 12:00:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2128662 00:11:59.718 12:00:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@830 -- # '[' -z 2128662 ']' 00:11:59.718 12:00:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.718 12:00:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:59.718 12:00:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.718 12:00:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:59.718 12:00:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:59.718 [2024-06-10 12:00:49.167498] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:11:59.718 [2024-06-10 12:00:49.167561] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.718 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.975 [2024-06-10 12:00:49.241320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:59.975 [2024-06-10 12:00:49.310421] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:59.975 [2024-06-10 12:00:49.310463] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:59.975 [2024-06-10 12:00:49.310473] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:59.975 [2024-06-10 12:00:49.310503] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:59.975 [2024-06-10 12:00:49.310510] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:59.975 [2024-06-10 12:00:49.310560] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.975 [2024-06-10 12:00:49.310654] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.975 [2024-06-10 12:00:49.310741] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:11:59.975 [2024-06-10 12:00:49.310742] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.539 12:00:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:00.539 12:00:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@863 -- # return 0 00:12:00.539 12:00:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:00.539 12:00:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:00.539 12:00:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:00.539 12:00:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.539 12:00:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:00.796 [2024-06-10 12:00:50.168826] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.796 12:00:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:12:00.796 12:00:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:12:00.796 12:00:50 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:01.051 Malloc1 00:12:01.051 12:00:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:01.051 Malloc2 00:12:01.308 12:00:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:01.308 12:00:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:01.564 12:00:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.821 [2024-06-10 12:00:51.120281] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.821 12:00:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:12:01.821 12:00:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ec7f723f-b13c-4d93-9129-1250f916c2e8 -a 10.0.0.2 -s 4420 -i 4 00:12:01.821 12:00:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:12:01.821 12:00:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:12:01.821 12:00:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:01.821 12:00:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:12:01.821 12:00:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 
-- # sleep 2 00:12:04.341 12:00:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:04.341 12:00:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:04.341 12:00:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:04.341 12:00:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:12:04.341 12:00:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:04.341 12:00:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:12:04.341 12:00:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:04.341 12:00:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:04.341 12:00:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:04.341 12:00:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:04.341 12:00:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:12:04.341 12:00:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:04.341 12:00:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:04.341 [ 0]:0x1 00:12:04.341 12:00:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:04.341 12:00:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:04.341 12:00:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=478ecfb923234a6fa2aedfd316d8c661 00:12:04.341 12:00:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 478ecfb923234a6fa2aedfd316d8c661 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:04.341 12:00:53 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:04.341 12:00:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:12:04.341 12:00:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:04.341 12:00:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:04.341 [ 0]:0x1 00:12:04.341 12:00:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:04.341 12:00:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:04.341 12:00:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=478ecfb923234a6fa2aedfd316d8c661 00:12:04.341 12:00:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 478ecfb923234a6fa2aedfd316d8c661 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:04.341 12:00:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:12:04.341 12:00:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:04.341 12:00:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:04.341 [ 1]:0x2 00:12:04.341 12:00:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:04.341 12:00:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:04.341 12:00:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=6d67586f168d4a12bebccbebe57c778d 00:12:04.341 12:00:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 6d67586f168d4a12bebccbebe57c778d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:04.341 12:00:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:12:04.341 12:00:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:12:04.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.598 12:00:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:04.855 12:00:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:05.112 12:00:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:12:05.112 12:00:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ec7f723f-b13c-4d93-9129-1250f916c2e8 -a 10.0.0.2 -s 4420 -i 4 00:12:05.112 12:00:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:05.112 12:00:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:12:05.112 12:00:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:05.112 12:00:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 1 ]] 00:12:05.112 12:00:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=1 00:12:05.112 12:00:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( 
nvme_devices == nvme_device_counter )) 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:07.635 12:00:56 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:07.635 [ 0]:0x2 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=6d67586f168d4a12bebccbebe57c778d 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 6d67586f168d4a12bebccbebe57c778d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:07.635 [ 0]:0x1 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 
-n 0x1 -o json 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=478ecfb923234a6fa2aedfd316d8c661 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 478ecfb923234a6fa2aedfd316d8c661 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:07.635 [ 1]:0x2 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:07.635 12:00:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:07.635 12:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=6d67586f168d4a12bebccbebe57c778d 00:12:07.635 12:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 6d67586f168d4a12bebccbebe57c778d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.635 12:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:07.893 12:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:12:07.893 12:00:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:12:07.893 12:00:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:12:07.893 12:00:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:12:07.893 12:00:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 
00:12:07.893 12:00:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:12:07.893 12:00:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:07.893 12:00:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:12:07.893 12:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:07.893 12:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:07.893 12:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:07.893 12:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:07.893 12:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:07.893 12:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.893 12:00:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:12:07.893 12:00:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:07.893 12:00:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:07.893 12:00:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:07.893 12:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:12:07.893 12:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:07.893 12:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:07.893 [ 0]:0x2 00:12:07.893 12:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:07.893 12:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:07.893 12:00:57 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@40 -- # nguid=6d67586f168d4a12bebccbebe57c778d 00:12:07.893 12:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 6d67586f168d4a12bebccbebe57c778d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.893 12:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:12:07.893 12:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:07.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.894 12:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:08.150 12:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:12:08.150 12:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ec7f723f-b13c-4d93-9129-1250f916c2e8 -a 10.0.0.2 -s 4420 -i 4 00:12:08.150 12:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:08.150 12:00:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:12:08.150 12:00:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:08.150 12:00:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:12:08.150 12:00:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:12:08.150 12:00:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:12:10.671 12:00:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:10.671 12:00:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:10.671 12:00:59 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:10.671 12:00:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:12:10.671 12:00:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:10.671 12:00:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:12:10.671 12:00:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:10.671 12:00:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:10.671 12:00:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:10.671 12:00:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:10.671 12:00:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:12:10.671 12:00:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:10.671 12:00:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:10.671 [ 0]:0x1 00:12:10.671 12:00:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:10.671 12:00:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:10.671 12:00:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=478ecfb923234a6fa2aedfd316d8c661 00:12:10.671 12:00:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 478ecfb923234a6fa2aedfd316d8c661 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:10.671 12:00:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:12:10.671 12:00:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:10.671 12:00:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:10.671 [ 1]:0x2 00:12:10.671 12:00:59 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:10.671 12:00:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:10.671 12:00:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=6d67586f168d4a12bebccbebe57c778d 00:12:10.671 12:00:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 6d67586f168d4a12bebccbebe57c778d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:10.671 12:00:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:10.671 12:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:12:10.671 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:12:10.671 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:12:10.671 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:12:10.671 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:10.671 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:12:10.671 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:10.671 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:12:10.671 12:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:10.671 12:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:10.671 12:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:10.671 12:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:10.671 12:01:00 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:10.671 12:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:10.671 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:12:10.671 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:10.671 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:10.671 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:10.671 12:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:12:10.671 12:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:10.671 12:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:10.671 [ 0]:0x2 00:12:10.671 12:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:10.672 12:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:10.929 12:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=6d67586f168d4a12bebccbebe57c778d 00:12:10.929 12:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 6d67586f168d4a12bebccbebe57c778d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:10.929 12:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:10.929 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:12:10.929 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host 
nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:10.929 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:10.929 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:10.929 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:10.929 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:10.929 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:10.929 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:10.929 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:10.929 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:10.929 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:10.929 [2024-06-10 12:01:00.358435] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:10.929 request: 00:12:10.929 { 00:12:10.929 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:10.929 "nsid": 2, 00:12:10.929 "host": "nqn.2016-06.io.spdk:host1", 00:12:10.929 "method": "nvmf_ns_remove_host", 00:12:10.929 "req_id": 1 00:12:10.929 } 00:12:10.929 Got JSON-RPC error response 00:12:10.929 response: 00:12:10.929 { 00:12:10.929 "code": -32602, 00:12:10.929 "message": "Invalid parameters" 00:12:10.929 } 00:12:10.929 
12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:12:10.929 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:10.929 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:10.929 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:10.929 12:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:12:10.929 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:12:10.929 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:12:10.929 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:12:10.929 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:10.929 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:12:10.929 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:10.929 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:12:10.929 12:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:10.929 12:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:10.929 12:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:10.929 12:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:11.186 12:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:11.186 12:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:11.186 12:01:00 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@652 -- # es=1 00:12:11.186 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:11.186 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:11.186 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:11.186 12:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:12:11.186 12:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:11.186 12:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:11.186 [ 0]:0x2 00:12:11.186 12:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:11.186 12:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:11.186 12:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=6d67586f168d4a12bebccbebe57c778d 00:12:11.186 12:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 6d67586f168d4a12bebccbebe57c778d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:11.186 12:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:12:11.186 12:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:11.186 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.186 12:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:11.444 12:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:12:11.444 12:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:12:11.444 12:01:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:11.444 12:01:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 
-- # sync 00:12:11.444 12:01:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:11.444 12:01:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:12:11.444 12:01:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:11.444 12:01:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:11.444 rmmod nvme_tcp 00:12:11.444 rmmod nvme_fabrics 00:12:11.444 rmmod nvme_keyring 00:12:11.444 12:01:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:11.444 12:01:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:12:11.444 12:01:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:12:11.444 12:01:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2128662 ']' 00:12:11.444 12:01:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2128662 00:12:11.444 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@949 -- # '[' -z 2128662 ']' 00:12:11.444 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # kill -0 2128662 00:12:11.444 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # uname 00:12:11.444 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:11.444 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2128662 00:12:11.703 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:11.703 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:11.703 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2128662' 00:12:11.703 killing process with pid 2128662 00:12:11.703 12:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@968 -- # kill 2128662 00:12:11.703 12:01:00 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@973 -- # wait 2128662 00:12:11.703 12:01:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:11.703 12:01:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:11.703 12:01:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:11.703 12:01:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:11.703 12:01:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:11.703 12:01:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.703 12:01:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:11.703 12:01:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.230 12:01:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:14.230 00:12:14.230 real 0m21.311s 00:12:14.230 user 0m51.093s 00:12:14.230 sys 0m7.926s 00:12:14.230 12:01:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:14.230 12:01:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:14.230 ************************************ 00:12:14.230 END TEST nvmf_ns_masking 00:12:14.230 ************************************ 00:12:14.230 12:01:03 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:12:14.230 12:01:03 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:14.230 12:01:03 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:14.230 12:01:03 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:14.230 12:01:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:14.230 ************************************ 00:12:14.230 START TEST nvmf_nvme_cli 00:12:14.230 
************************************ 00:12:14.230 12:01:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:14.230 * Looking for test storage... 00:12:14.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:14.230 12:01:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:14.230 12:01:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:14.230 12:01:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.230 12:01:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.230 12:01:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.230 12:01:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.230 12:01:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.230 12:01:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.230 12:01:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.230 12:01:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.230 12:01:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.230 12:01:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.230 12:01:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:14.230 12:01:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:14.230 12:01:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.230 12:01:03 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.230 12:01:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:14.230 12:01:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.230 12:01:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:14.231 12:01:03 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.231 12:01:03 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.231 12:01:03 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.231 12:01:03 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.231 12:01:03 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.231 12:01:03 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.231 12:01:03 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:14.231 12:01:03 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.231 12:01:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:12:14.231 12:01:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:14.231 12:01:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:14.231 12:01:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.231 12:01:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.231 12:01:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.231 12:01:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:14.231 12:01:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:14.231 12:01:03 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:14.231 12:01:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:14.231 12:01:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:14.231 12:01:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:14.231 12:01:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:14.231 12:01:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:14.231 12:01:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.231 12:01:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:14.231 12:01:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:14.231 12:01:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:14.231 12:01:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.231 12:01:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:14.231 12:01:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.231 12:01:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:14.231 12:01:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:14.231 12:01:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:12:14.231 12:01:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:20.789 12:01:09 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:20.789 12:01:09 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:20.789 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:20.789 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:20.789 12:01:09 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:20.789 Found net devices under 0000:af:00.0: cvl_0_0 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:20.789 Found net devices under 0000:af:00.1: cvl_0_1 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:20.789 12:01:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:20.789 12:01:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:20.789 12:01:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:20.789 12:01:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:20.789 12:01:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:20.790 12:01:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:20.790 12:01:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:20.790 12:01:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:20.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:20.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:12:20.790 00:12:20.790 --- 10.0.0.2 ping statistics --- 00:12:20.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.790 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:12:20.790 12:01:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:20.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:20.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:12:20.790 00:12:20.790 --- 10.0.0.1 ping statistics --- 00:12:20.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.790 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:12:20.790 12:01:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:20.790 12:01:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:12:20.790 12:01:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:20.790 12:01:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:20.790 12:01:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:20.790 12:01:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:20.790 12:01:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:20.790 12:01:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:20.790 12:01:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:20.790 12:01:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:20.790 12:01:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:20.790 12:01:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:20.790 12:01:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:20.790 12:01:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2134564 00:12:20.790 12:01:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:20.790 12:01:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2134564 00:12:20.790 12:01:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@830 -- # '[' -z 2134564 ']' 
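The harness above only checks the exit status of `ping -c 1`; if a script instead needed the loss figure from the summary line, it could extract it like this (a sketch, not part of the harness — the sample line is copied from the trace):

```shell
#!/usr/bin/env bash
# Extract the packet-loss percentage from ping's summary line.
summary='1 packets transmitted, 1 received, 0% packet loss, time 0ms'

# grep -o isolates "<n>% packet loss"; cut takes the number before %.
loss=$(grep -o '[0-9.]*% packet loss' <<<"$summary" | cut -d'%' -f1)

echo "packet loss: ${loss}%"   # → packet loss: 0%
```

Checking the exit status, as the harness does, is the simpler and more robust test of reachability; parsing the summary is only useful when the actual loss rate matters.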
00:12:20.790 12:01:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.790 12:01:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:20.790 12:01:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.790 12:01:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:20.790 12:01:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:20.790 [2024-06-10 12:01:10.278196] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:12:20.790 [2024-06-10 12:01:10.278241] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.048 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.048 [2024-06-10 12:01:10.352020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:21.048 [2024-06-10 12:01:10.425550] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.048 [2024-06-10 12:01:10.425587] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.048 [2024-06-10 12:01:10.425597] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.048 [2024-06-10 12:01:10.425605] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.048 [2024-06-10 12:01:10.425612] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:21.048 [2024-06-10 12:01:10.425653] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.048 [2024-06-10 12:01:10.425740] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.048 [2024-06-10 12:01:10.425760] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.048 [2024-06-10 12:01:10.425762] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.637 12:01:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:21.637 12:01:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@863 -- # return 0 00:12:21.637 12:01:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:21.637 12:01:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:21.637 12:01:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:21.637 12:01:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.637 12:01:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:21.637 12:01:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:21.637 12:01:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:21.637 [2024-06-10 12:01:11.143400] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:21.970 Malloc0 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:21.970 
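The `trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT` line above is the harness's cleanup idiom: teardown runs even if the test is interrupted. A minimal sketch of the pattern, with a stand-in handler in place of `nvmftestfini`:

```shell
#!/usr/bin/env bash
# Register a cleanup handler for interruption and normal exit, in
# the style of the harness's trap line. "cleanup" is a stand-in for
# the real nvmftestfini teardown.
cleanup() { echo "teardown ran"; }
trap cleanup SIGINT SIGTERM EXIT

echo "test body"
# On exit (normal or via SIGINT/SIGTERM), prints "teardown ran".
```

The `|| :` in the original trap body keeps the handler from aborting if `process_shm` fails, so `nvmftestfini` still runs.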
12:01:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:21.970 Malloc1 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 
00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:21.970 [2024-06-10 12:01:11.227800] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 4420 00:12:21.970 00:12:21.970 Discovery Log Number of Records 2, Generation counter 2 00:12:21.970 =====Discovery Log Entry 0====== 00:12:21.970 trtype: tcp 00:12:21.970 adrfam: ipv4 00:12:21.970 subtype: current discovery subsystem 00:12:21.970 treq: not required 00:12:21.970 portid: 0 00:12:21.970 trsvcid: 4420 00:12:21.970 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:21.970 traddr: 10.0.0.2 00:12:21.970 eflags: explicit discovery connections, duplicate discovery information 00:12:21.970 sectype: none 00:12:21.970 =====Discovery Log Entry 1====== 00:12:21.970 trtype: tcp 00:12:21.970 adrfam: ipv4 00:12:21.970 subtype: nvme subsystem 00:12:21.970 treq: not required 00:12:21.970 portid: 0 00:12:21.970 trsvcid: 4420 00:12:21.970 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:21.970 traddr: 10.0.0.2 00:12:21.970 eflags: none 00:12:21.970 sectype: none 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:21.970 12:01:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:23.345 12:01:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:23.345 12:01:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # local i=0 00:12:23.345 12:01:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:23.345 12:01:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:12:23.345 12:01:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:12:23.345 12:01:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # sleep 2 00:12:25.247 12:01:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:25.247 12:01:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:25.247 12:01:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 
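The `get_nvme_devs` calls traced above read `nvme list` line by line and keep only rows whose first column is a `/dev/nvme*` node; header and separator rows fail the glob test. A self-contained sketch of that filter, with a heredoc standing in for real `nvme list` output:

```shell
#!/usr/bin/env bash
# Sketch of the get_nvme_devs pattern: keep only /dev/nvme* rows.
get_nvme_devs() {
    local dev _
    while read -r dev _; do
        [[ $dev == /dev/nvme* ]] && echo "$dev"
    done
}

# Heredoc stands in for `nvme list`; the header and dashes rows are
# skipped because their first field is not /dev/nvme*.
get_nvme_devs <<'EOF'
Node                  SN                   Model
--------------------- -------------------- --------------------
/dev/nvme0n2          SPDKISFASTANDAWESOME SPDK bdev Controller
/dev/nvme0n1          SPDKISFASTANDAWESOME SPDK bdev Controller
EOF
# → /dev/nvme0n2
# → /dev/nvme0n1
```

Binding only the first whitespace-separated field with `read -r dev _` is what makes the column layout of `nvme list` irrelevant to the filter.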
00:12:25.247 12:01:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:12:25.247 12:01:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:25.247 12:01:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # return 0 00:12:25.247 12:01:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:25.247 12:01:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:25.247 12:01:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:25.247 12:01:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:25.506 12:01:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:25.506 12:01:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:25.506 12:01:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:25.506 12:01:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:25.506 12:01:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:25.506 12:01:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:25.506 12:01:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:25.506 12:01:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:25.506 12:01:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:25.506 12:01:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:25.506 12:01:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:25.506 /dev/nvme0n1 ]] 00:12:25.506 12:01:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:25.506 12:01:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:25.506 12:01:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 
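The `waitforserial` sequence above polls `lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME` in a bounded loop (`(( i++ <= 15 ))` with `sleep 2`) until the expected device count appears. A generic sketch of that poll-until loop, with a hypothetical `probe` in place of the `lsblk | grep -c` pipeline:

```shell
#!/usr/bin/env bash
# Bounded retry loop in the style of waitforserial. "probe" is a
# stand-in for: lsblk -l -o NAME,SERIAL | grep -c "$serial"
expected=2
probe() { echo 2; }

i=0
while (( i++ <= 15 )); do
    found=$(probe)
    if (( found == expected )); then
        echo "found $found devices"   # → found 2 devices
        break
    fi
    sleep 2
done
```

Bounding the loop at 16 attempts keeps a missing device from hanging the whole test run, at the cost of roughly a 30-second worst case.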
00:12:25.506 12:01:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:25.506 12:01:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:25.506 12:01:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:25.506 12:01:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:25.506 12:01:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:25.506 12:01:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:25.506 12:01:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:25.506 12:01:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:25.506 12:01:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:25.506 12:01:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:25.506 12:01:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:25.506 12:01:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:25.506 12:01:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:25.506 12:01:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:25.764 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.764 12:01:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:25.764 12:01:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # local i=0 00:12:25.764 12:01:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:12:25.764 12:01:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.764 12:01:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:12:25.764 12:01:15 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.764 12:01:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1230 -- # return 0 00:12:25.764 12:01:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:25.764 12:01:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:25.765 12:01:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:25.765 12:01:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.023 12:01:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:26.023 12:01:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:26.023 12:01:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:26.023 12:01:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:26.023 12:01:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:26.023 12:01:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:26.023 12:01:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:26.024 12:01:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:26.024 12:01:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:26.024 rmmod nvme_tcp 00:12:26.024 rmmod nvme_fabrics 00:12:26.024 rmmod nvme_keyring 00:12:26.024 12:01:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:26.024 12:01:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:26.024 12:01:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:26.024 12:01:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2134564 ']' 00:12:26.024 12:01:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2134564 00:12:26.024 12:01:15 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@949 -- # '[' -z 2134564 ']' 00:12:26.024 12:01:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # kill -0 2134564 00:12:26.024 12:01:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # uname 00:12:26.024 12:01:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:26.024 12:01:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2134564 00:12:26.024 12:01:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:26.024 12:01:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:26.024 12:01:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2134564' 00:12:26.024 killing process with pid 2134564 00:12:26.024 12:01:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # kill 2134564 00:12:26.024 12:01:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # wait 2134564 00:12:26.281 12:01:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:26.281 12:01:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:26.281 12:01:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:26.281 12:01:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:26.281 12:01:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:26.281 12:01:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.281 12:01:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:26.281 12:01:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.816 12:01:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:28.816 00:12:28.816 real 0m14.358s 00:12:28.816 user 
0m22.428s 00:12:28.816 sys 0m5.991s 00:12:28.816 12:01:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:28.816 12:01:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:28.816 ************************************ 00:12:28.816 END TEST nvmf_nvme_cli 00:12:28.816 ************************************ 00:12:28.816 12:01:17 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:28.816 12:01:17 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:28.816 12:01:17 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:28.816 12:01:17 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:28.816 12:01:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:28.816 ************************************ 00:12:28.816 START TEST nvmf_vfio_user 00:12:28.816 ************************************ 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:28.816 * Looking for test storage... 
00:12:28.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
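The `NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")` line sourced above groups the per-host flags into a bash array so any `nvme` invocation can splice them in with `"${NVME_HOST[@]}"`. A sketch using the hostnqn/hostid values seen earlier in this log:

```shell
#!/usr/bin/env bash
# Group per-host nvme-cli flags into one array, as nvmf/common.sh
# does; values here are the ones generated in this test run.
NVME_HOSTNQN='nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e'
NVME_HOSTID='006f0d1b-21c0-e711-906e-00163566263e'
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

# Callers splice the flags into a command line, e.g.:
#   nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420
echo "${#NVME_HOST[@]} host flags: ${NVME_HOST[@]}"
```

Quoting the expansion as `"${NVME_HOST[@]}"` keeps each flag a single word even if a value ever contains whitespace, which a plain string variable would not guarantee.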
00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:28.816 
12:01:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0
00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64
00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2
00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER
00:12:28.816 12:01:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER
00:12:28.817 12:01:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user
00:12:28.817 12:01:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' ''
00:12:28.817 12:01:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=
00:12:28.817 12:01:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args=
00:12:28.817 12:01:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2136030
00:12:28.817 12:01:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2136030'
00:12:28.817 Process pid: 2136030
00:12:28.817 12:01:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:12:28.817 12:01:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2136030
00:12:28.817 12:01:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]'
00:12:28.817 12:01:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # '[' -z 2136030 ']'
00:12:28.817 12:01:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:28.817 12:01:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local max_retries=100
00:12:28.817 12:01:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:28.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:28.817 12:01:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@839 -- # xtrace_disable
00:12:28.817 12:01:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x
00:12:28.817 [2024-06-10 12:01:17.984511] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization...
00:12:28.817 [2024-06-10 12:01:17.984561] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:28.817 EAL: No free 2048 kB hugepages reported on node 1
00:12:28.817 [2024-06-10 12:01:18.052189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:28.817 [2024-06-10 12:01:18.125425] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:28.817 [2024-06-10 12:01:18.125466] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:28.817 [2024-06-10 12:01:18.125478] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:28.817 [2024-06-10 12:01:18.125487] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:28.817 [2024-06-10 12:01:18.125494] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:28.817 [2024-06-10 12:01:18.125553] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:12:28.817 [2024-06-10 12:01:18.129492] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:12:28.817 [2024-06-10 12:01:18.129512] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3
00:12:28.817 [2024-06-10 12:01:18.129515] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:12:29.385 12:01:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:12:29.385 12:01:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@863 -- # return 0
00:12:29.385 12:01:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1
00:12:30.322 12:01:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER
00:12:30.581 12:01:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user
00:12:30.581 12:01:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2
00:12:30.581 12:01:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:12:30.581 12:01:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1
00:12:30.581 12:01:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:12:30.840 Malloc1
00:12:30.840 12:01:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
00:12:31.099 12:01:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
00:12:31.099 12:01:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
00:12:31.357 12:01:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:12:31.357 12:01:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2
00:12:31.357 12:01:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:12:31.616 Malloc2
00:12:31.616 12:01:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
00:12:31.875 12:01:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
00:12:31.875 12:01:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0
00:12:32.136 12:01:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user
00:12:32.136 12:01:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2
00:12:32.136 12:01:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES)
00:12:32.136 12:01:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1
00:12:32.136 12:01:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1
00:12:32.136 12:01:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci
00:12:32.136 [2024-06-10 12:01:21.523620] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization...
00:12:32.136 [2024-06-10 12:01:21.523657] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2136599 ]
00:12:32.136 EAL: No free 2048 kB hugepages reported on node 1
00:12:32.136 [2024-06-10 12:01:21.555822] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1
00:12:32.136 [2024-06-10 12:01:21.563869] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32
00:12:32.136 [2024-06-10 12:01:21.563891] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f60f9595000
00:12:32.136 [2024-06-10 12:01:21.564869] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:32.136 [2024-06-10 12:01:21.565869] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:32.136 [2024-06-10 12:01:21.566881] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:32.136 [2024-06-10 12:01:21.567888] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0
00:12:32.136 [2024-06-10 12:01:21.568892] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:12:32.136 [2024-06-10 12:01:21.569900] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:32.136 [2024-06-10 12:01:21.570897] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:12:32.136 [2024-06-10 12:01:21.571905] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:32.136 [2024-06-10 12:01:21.572913] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32
00:12:32.136 [2024-06-10 12:01:21.572927] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f60f958a000
00:12:32.136 [2024-06-10 12:01:21.573821] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:12:32.136 [2024-06-10 12:01:21.583114] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully
00:12:32.136 [2024-06-10 12:01:21.583136] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout)
00:12:32.136 [2024-06-10 12:01:21.588002] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff
00:12:32.136 [2024-06-10 12:01:21.588041] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192
00:12:32.136 [2024-06-10 12:01:21.588109] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout)
00:12:32.136 [2024-06-10 12:01:21.588128] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout)
00:12:32.136 [2024-06-10 12:01:21.588135] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout)
00:12:32.136 [2024-06-10 12:01:21.588998] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300
00:12:32.137 [2024-06-10 12:01:21.589008] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout)
00:12:32.137 [2024-06-10 12:01:21.589017] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout)
00:12:32.137 [2024-06-10 12:01:21.590005] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff
00:12:32.137 [2024-06-10 12:01:21.590014] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout)
00:12:32.137 [2024-06-10 12:01:21.590023] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms)
00:12:32.137 [2024-06-10 12:01:21.591009] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0
00:12:32.137 [2024-06-10 12:01:21.591018] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:12:32.137 [2024-06-10 12:01:21.592015] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0
00:12:32.137 [2024-06-10 12:01:21.592025] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0
00:12:32.137 [2024-06-10 12:01:21.592031] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms)
00:12:32.137 [2024-06-10 12:01:21.592039] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:12:32.137 [2024-06-10 12:01:21.592146] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1
00:12:32.137 [2024-06-10 12:01:21.592152] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:12:32.137 [2024-06-10 12:01:21.592159] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000
00:12:32.137 [2024-06-10 12:01:21.593024] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000
00:12:32.137 [2024-06-10 12:01:21.594033] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff
00:12:32.137 [2024-06-10 12:01:21.595039] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001
00:12:32.137 [2024-06-10 12:01:21.596033] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:12:32.137 [2024-06-10 12:01:21.596130] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:12:32.137 [2024-06-10 12:01:21.597047] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1
00:12:32.137 [2024-06-10 12:01:21.597056] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:12:32.137 [2024-06-10 12:01:21.597062] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms)
00:12:32.137 [2024-06-10 12:01:21.597081] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout)
00:12:32.137 [2024-06-10 12:01:21.597094] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms)
00:12:32.137 [2024-06-10 12:01:21.597113] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:12:32.137 [2024-06-10 12:01:21.597120] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:12:32.137 [2024-06-10 12:01:21.597133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:12:32.137 [2024-06-10 12:01:21.597179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0
00:12:32.137 [2024-06-10 12:01:21.597190] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072
00:12:32.137 [2024-06-10 12:01:21.597196] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072
00:12:32.137 [2024-06-10 12:01:21.597202] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001
00:12:32.137 [2024-06-10 12:01:21.597211] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000
00:12:32.137 [2024-06-10 12:01:21.597217] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1
00:12:32.137 [2024-06-10 12:01:21.597223] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1
00:12:32.137 [2024-06-10 12:01:21.597229] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms)
00:12:32.137 [2024-06-10 12:01:21.597238] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms)
00:12:32.137 [2024-06-10 12:01:21.597248] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0
00:12:32.137 [2024-06-10 12:01:21.597263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0
00:12:32.137 [2024-06-10 12:01:21.597274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:12:32.137 [2024-06-10 12:01:21.597284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:12:32.137 [2024-06-10 12:01:21.597293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:12:32.137 [2024-06-10 12:01:21.597301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:12:32.137 [2024-06-10 12:01:21.597309] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms)
00:12:32.137 [2024-06-10 12:01:21.597320] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:12:32.137 [2024-06-10 12:01:21.597329] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0
00:12:32.137 [2024-06-10 12:01:21.597337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0
00:12:32.137 [2024-06-10 12:01:21.597344] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms
00:12:32.137 [2024-06-10 12:01:21.597351] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms)
00:12:32.137 [2024-06-10 12:01:21.597359] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms)
00:12:32.137 [2024-06-10 12:01:21.597365] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms)
00:12:32.137 [2024-06-10 12:01:21.597375] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:12:32.137 [2024-06-10 12:01:21.597383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0
00:12:32.137 [2024-06-10 12:01:21.597425] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms)
00:12:32.137 [2024-06-10 12:01:21.597435] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms)
00:12:32.137 [2024-06-10 12:01:21.597443] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096
00:12:32.137 [2024-06-10 12:01:21.597449] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000
00:12:32.137 [2024-06-10 12:01:21.597456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0
00:12:32.137 [2024-06-10 12:01:21.597468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0
00:12:32.137 [2024-06-10 12:01:21.597482] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added
00:12:32.137 [2024-06-10 12:01:21.597492] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms)
00:12:32.137 [2024-06-10 12:01:21.597501] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms)
00:12:32.137 [2024-06-10 12:01:21.597509] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:12:32.137 [2024-06-10 12:01:21.597515] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:12:32.137 [2024-06-10 12:01:21.597522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:12:32.137 [2024-06-10 12:01:21.597538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0
00:12:32.137 [2024-06-10 12:01:21.597551] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:12:32.137 [2024-06-10 12:01:21.597560] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:12:32.137 [2024-06-10 12:01:21.597569] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:12:32.137 [2024-06-10 12:01:21.597575] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:12:32.137 [2024-06-10 12:01:21.597582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:12:32.137 [2024-06-10 12:01:21.597592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0
00:12:32.137 [2024-06-10 12:01:21.597601] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms)
00:12:32.137 [2024-06-10 12:01:21.597609] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms)
00:12:32.137 [2024-06-10 12:01:21.597618] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms)
00:12:32.137 [2024-06-10 12:01:21.597625] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms)
00:12:32.137 [2024-06-10 12:01:21.597631] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms)
00:12:32.137 [2024-06-10 12:01:21.597637] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID
00:12:32.138 [2024-06-10 12:01:21.597643] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms)
00:12:32.138 [2024-06-10 12:01:21.597649] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout)
00:12:32.138 [2024-06-10 12:01:21.597670] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0
00:12:32.138 [2024-06-10 12:01:21.597681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0
00:12:32.138 [2024-06-10 12:01:21.597694] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0
00:12:32.138 [2024-06-10 12:01:21.597702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0
00:12:32.138 [2024-06-10 12:01:21.597714] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0
00:12:32.138 [2024-06-10 12:01:21.597728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0
00:12:32.138 [2024-06-10 12:01:21.597741] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:12:32.138 [2024-06-10 12:01:21.597748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0
00:12:32.138 [2024-06-10 12:01:21.597761] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192
00:12:32.138 [2024-06-10 12:01:21.597767] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000
00:12:32.138 [2024-06-10 12:01:21.597771] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000
00:12:32.138 [2024-06-10 12:01:21.597776] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000
00:12:32.138 [2024-06-10 12:01:21.597783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000
00:12:32.138 [2024-06-10 12:01:21.597791] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512
00:12:32.138 [2024-06-10 12:01:21.597798] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000
00:12:32.138 [2024-06-10 12:01:21.597805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0
00:12:32.138 [2024-06-10 12:01:21.597813] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512
00:12:32.138 [2024-06-10 12:01:21.597818] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:12:32.138 [2024-06-10 12:01:21.597825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:12:32.138 [2024-06-10 12:01:21.597833] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096
00:12:32.138 [2024-06-10 12:01:21.597839] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000
00:12:32.138 [2024-06-10 12:01:21.597845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0
00:12:32.138 [2024-06-10 12:01:21.597853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0
00:12:32.138 [2024-06-10 12:01:21.597867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0
00:12:32.138 [2024-06-10 12:01:21.597879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0
00:12:32.138 [2024-06-10 12:01:21.597890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0
00:12:32.138 =====================================================
00:12:32.138 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:12:32.138 =====================================================
00:12:32.138 Controller Capabilities/Features
00:12:32.138 ================================
00:12:32.138 Vendor ID: 4e58
00:12:32.138 Subsystem Vendor ID: 4e58
00:12:32.138 Serial Number: SPDK1
00:12:32.138 Model Number: SPDK bdev Controller
00:12:32.138 Firmware Version: 24.09
00:12:32.138 Recommended Arb Burst: 6
00:12:32.138 IEEE OUI Identifier: 8d 6b 50
00:12:32.138 Multi-path I/O
00:12:32.138 May have multiple subsystem ports: Yes
00:12:32.138 May have multiple controllers: Yes
00:12:32.138 Associated with SR-IOV VF: No
00:12:32.138 Max Data Transfer Size: 131072
00:12:32.138 Max Number of Namespaces: 32
00:12:32.138 Max Number of I/O Queues: 127
00:12:32.138 NVMe Specification Version (VS): 1.3
00:12:32.138 NVMe Specification Version (Identify): 1.3
00:12:32.138 Maximum Queue Entries: 256
00:12:32.138 Contiguous Queues Required: Yes
00:12:32.138 Arbitration Mechanisms Supported
00:12:32.138 Weighted Round Robin: Not Supported
00:12:32.138 Vendor Specific: Not Supported
00:12:32.138 Reset Timeout: 15000 ms
00:12:32.138 Doorbell Stride: 4 bytes
00:12:32.138 NVM Subsystem Reset: Not Supported
00:12:32.138 Command Sets Supported
00:12:32.138 NVM Command Set: Supported
00:12:32.138 Boot Partition: Not Supported
00:12:32.138 Memory Page Size Minimum: 4096 bytes
00:12:32.138 Memory Page Size Maximum: 4096 bytes
00:12:32.138 Persistent Memory Region: Not Supported
00:12:32.138 Optional Asynchronous Events Supported
00:12:32.138 Namespace Attribute Notices: Supported
00:12:32.138 Firmware Activation Notices: Not Supported
00:12:32.138 ANA Change Notices: Not Supported
00:12:32.138 PLE Aggregate Log Change Notices: Not Supported
00:12:32.138 LBA Status Info Alert Notices: Not Supported
00:12:32.138 EGE Aggregate Log Change Notices: Not Supported
00:12:32.138 Normal NVM Subsystem Shutdown event: Not Supported
00:12:32.138 Zone Descriptor Change Notices: Not Supported
00:12:32.138 Discovery Log Change Notices: Not Supported
00:12:32.138 Controller Attributes
00:12:32.138 128-bit Host Identifier: Supported
00:12:32.138 Non-Operational Permissive Mode: Not Supported
00:12:32.138 NVM Sets: Not Supported
00:12:32.138 Read Recovery Levels: Not Supported
00:12:32.138 Endurance Groups: Not Supported
00:12:32.138 Predictable Latency Mode: Not Supported
00:12:32.138 Traffic Based Keep ALive: Not Supported
00:12:32.138 Namespace Granularity: Not Supported
00:12:32.138 SQ Associations: Not Supported
00:12:32.138 UUID List: Not Supported
00:12:32.138 Multi-Domain Subsystem: Not Supported
00:12:32.138 Fixed Capacity Management: Not Supported
00:12:32.138 Variable Capacity Management: Not Supported
00:12:32.138 Delete Endurance Group: Not Supported
00:12:32.138 Delete NVM Set: Not Supported
00:12:32.138 Extended LBA Formats Supported: Not Supported
00:12:32.138 Flexible Data Placement Supported: Not Supported
00:12:32.138
00:12:32.138 Controller Memory Buffer Support
00:12:32.138 ================================
00:12:32.138 Supported: No
00:12:32.138
00:12:32.138 Persistent Memory Region Support
00:12:32.138 ================================
00:12:32.138 Supported: No
00:12:32.138
00:12:32.138 Admin Command Set Attributes
00:12:32.138 ============================
00:12:32.138 Security Send/Receive: Not Supported
00:12:32.138 Format NVM: Not Supported
00:12:32.138 Firmware Activate/Download: Not Supported
00:12:32.138 Namespace Management: Not Supported
00:12:32.138 Device Self-Test: Not Supported
00:12:32.138 Directives: Not Supported
00:12:32.138 NVMe-MI: Not Supported
00:12:32.138 Virtualization Management: Not Supported
00:12:32.138 Doorbell Buffer Config: Not Supported
00:12:32.138 Get LBA Status Capability: Not Supported
00:12:32.138 Command & Feature Lockdown Capability: Not Supported
00:12:32.138 Abort Command Limit: 4
00:12:32.138 Async Event Request Limit: 4
00:12:32.138 Number of Firmware Slots: N/A
00:12:32.138 Firmware Slot 1 Read-Only: N/A
00:12:32.138 Firmware Activation Without Reset: N/A
00:12:32.138 Multiple Update Detection Support: N/A
00:12:32.138 Firmware Update Granularity: No Information Provided
00:12:32.138 Per-Namespace SMART Log: No
00:12:32.138 Asymmetric Namespace Access Log Page: Not Supported
00:12:32.138 Subsystem NQN: nqn.2019-07.io.spdk:cnode1
00:12:32.138 Command Effects Log Page: Supported
00:12:32.138 Get Log Page Extended Data: Supported
00:12:32.138 Telemetry Log Pages: Not Supported
00:12:32.138 Persistent Event Log Pages: Not Supported
00:12:32.138 Supported Log Pages Log Page: May Support
00:12:32.138 Commands Supported & Effects Log Page: Not Supported
00:12:32.138 Feature Identifiers & Effects Log Page:May Support
00:12:32.138 NVMe-MI Commands & Effects Log Page: May Support
00:12:32.138 Data Area 4 for Telemetry Log: Not Supported
00:12:32.138 Error Log Page Entries Supported: 128
00:12:32.138 Keep Alive: Supported
00:12:32.138 Keep Alive Granularity: 10000 ms
00:12:32.138
00:12:32.138 NVM Command Set Attributes
00:12:32.138 ==========================
00:12:32.138 Submission Queue Entry Size
00:12:32.138 Max: 64
00:12:32.138 Min: 64
00:12:32.138 Completion Queue Entry Size
00:12:32.138 Max: 16
00:12:32.138 Min: 16
00:12:32.138 Number of Namespaces: 32
00:12:32.138 Compare Command: Supported
00:12:32.138 Write Uncorrectable Command: Not Supported
00:12:32.138 Dataset Management Command: Supported
00:12:32.138 Write Zeroes Command: Supported
00:12:32.138 Set Features Save Field: Not Supported
00:12:32.138 Reservations: Not Supported
00:12:32.138 Timestamp: Not Supported
00:12:32.138 Copy: Supported
00:12:32.138 Volatile Write Cache: Present
00:12:32.138 Atomic Write Unit (Normal): 1
00:12:32.138 Atomic Write Unit (PFail): 1
00:12:32.139 Atomic Compare & Write Unit: 1
00:12:32.139 Fused Compare & Write: Supported
00:12:32.139 Scatter-Gather List
00:12:32.139 SGL Command Set: Supported (Dword aligned)
00:12:32.139 SGL Keyed: Not Supported
00:12:32.139 SGL Bit Bucket Descriptor: Not Supported
00:12:32.139 SGL Metadata Pointer: Not Supported
00:12:32.139 Oversized SGL: Not Supported
00:12:32.139 SGL Metadata Address: Not Supported
00:12:32.139 SGL Offset: Not Supported
00:12:32.139 Transport SGL Data Block: Not Supported
00:12:32.139 Replay Protected Memory Block: Not Supported
00:12:32.139
00:12:32.139 Firmware Slot Information
00:12:32.139 =========================
00:12:32.139 Active slot: 1
00:12:32.139 Slot 1 Firmware Revision: 24.09
00:12:32.139
00:12:32.139
00:12:32.139 Commands Supported and Effects
00:12:32.139 ==============================
00:12:32.139 Admin Commands
00:12:32.139 --------------
00:12:32.139 Get Log Page (02h): Supported
00:12:32.139 Identify (06h): Supported
00:12:32.139 Abort (08h): Supported
00:12:32.139 Set Features (09h): Supported
00:12:32.139 Get Features (0Ah): Supported
00:12:32.139 Asynchronous Event Request (0Ch): Supported
00:12:32.139 Keep Alive (18h): Supported
00:12:32.139 I/O Commands
00:12:32.139 ------------
00:12:32.139 Flush (00h): Supported LBA-Change
00:12:32.139 Write (01h): Supported LBA-Change
00:12:32.139 Read (02h): Supported
00:12:32.139 Compare (05h): Supported
00:12:32.139 Write Zeroes (08h): Supported LBA-Change
00:12:32.139 Dataset Management (09h): Supported LBA-Change
00:12:32.139 Copy (19h): Supported LBA-Change
00:12:32.139 Unknown (79h): Supported LBA-Change
00:12:32.139 Unknown (7Ah): Supported
00:12:32.139
00:12:32.139 Error Log
00:12:32.139 =========
00:12:32.139
00:12:32.139 Arbitration
00:12:32.139 ===========
00:12:32.139 Arbitration Burst: 1
00:12:32.139
00:12:32.139 Power Management
00:12:32.139 ================
00:12:32.139 Number of Power States: 1
00:12:32.139 Current Power State: Power State #0
00:12:32.139 Power State #0:
00:12:32.139 Max Power: 0.00 W
00:12:32.139 Non-Operational State: Operational
00:12:32.139 Entry Latency: Not Reported
00:12:32.139 Exit Latency: Not Reported
00:12:32.139 Relative Read Throughput: 0
00:12:32.139 Relative Read Latency: 0
00:12:32.139 Relative Write Throughput: 0
00:12:32.139 Relative Write Latency: 0
00:12:32.139 Idle Power: Not Reported
00:12:32.139 Active Power: Not Reported
00:12:32.139 Non-Operational Permissive Mode: Not Supported
00:12:32.139
00:12:32.139 Health Information
00:12:32.139 ==================
00:12:32.139 Critical Warnings:
00:12:32.139 Available Spare Space: OK
00:12:32.139 Temperature: OK
00:12:32.139 Device Reliability: OK
00:12:32.139 Read Only: No
00:12:32.139 Volatile Memory Backup: OK
00:12:32.139 Current Temperature: 0 Kelvin (-2[2024-06-10 12:01:21.597978] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:12:32.139 [2024-06-10 12:01:21.597989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:12:32.139 [2024-06-10 12:01:21.598015] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD
00:12:32.139 [2024-06-10 12:01:21.598026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0
m:0 dnr:0 00:12:32.139 [2024-06-10 12:01:21.598033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.139 [2024-06-10 12:01:21.598041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.139 [2024-06-10 12:01:21.598049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.139 [2024-06-10 12:01:21.601483] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:32.139 [2024-06-10 12:01:21.601496] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:32.139 [2024-06-10 12:01:21.602065] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:32.139 [2024-06-10 12:01:21.602113] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:32.139 [2024-06-10 12:01:21.602120] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:32.139 [2024-06-10 12:01:21.603072] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:32.139 [2024-06-10 12:01:21.603084] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:32.139 [2024-06-10 12:01:21.603134] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:32.139 [2024-06-10 12:01:21.604102] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 
0x200000200000, Size 0x200000 00:12:32.139 73 Celsius) 00:12:32.139 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:32.139 Available Spare: 0% 00:12:32.139 Available Spare Threshold: 0% 00:12:32.139 Life Percentage Used: 0% 00:12:32.139 Data Units Read: 0 00:12:32.139 Data Units Written: 0 00:12:32.139 Host Read Commands: 0 00:12:32.139 Host Write Commands: 0 00:12:32.139 Controller Busy Time: 0 minutes 00:12:32.139 Power Cycles: 0 00:12:32.139 Power On Hours: 0 hours 00:12:32.139 Unsafe Shutdowns: 0 00:12:32.139 Unrecoverable Media Errors: 0 00:12:32.139 Lifetime Error Log Entries: 0 00:12:32.139 Warning Temperature Time: 0 minutes 00:12:32.139 Critical Temperature Time: 0 minutes 00:12:32.139 00:12:32.139 Number of Queues 00:12:32.139 ================ 00:12:32.139 Number of I/O Submission Queues: 127 00:12:32.139 Number of I/O Completion Queues: 127 00:12:32.139 00:12:32.139 Active Namespaces 00:12:32.139 ================= 00:12:32.139 Namespace ID:1 00:12:32.139 Error Recovery Timeout: Unlimited 00:12:32.139 Command Set Identifier: NVM (00h) 00:12:32.139 Deallocate: Supported 00:12:32.139 Deallocated/Unwritten Error: Not Supported 00:12:32.139 Deallocated Read Value: Unknown 00:12:32.139 Deallocate in Write Zeroes: Not Supported 00:12:32.139 Deallocated Guard Field: 0xFFFF 00:12:32.139 Flush: Supported 00:12:32.139 Reservation: Supported 00:12:32.139 Namespace Sharing Capabilities: Multiple Controllers 00:12:32.139 Size (in LBAs): 131072 (0GiB) 00:12:32.139 Capacity (in LBAs): 131072 (0GiB) 00:12:32.139 Utilization (in LBAs): 131072 (0GiB) 00:12:32.139 NGUID: 11A09C8A03DC4A0B8714530861AF0A51 00:12:32.139 UUID: 11a09c8a-03dc-4a0b-8714-530861af0a51 00:12:32.139 Thin Provisioning: Not Supported 00:12:32.139 Per-NS Atomic Units: Yes 00:12:32.139 Atomic Boundary Size (Normal): 0 00:12:32.139 Atomic Boundary Size (PFail): 0 00:12:32.139 Atomic Boundary Offset: 0 00:12:32.139 Maximum Single Source Range Length: 65535 00:12:32.139 Maximum Copy Length: 65535 
00:12:32.139 Maximum Source Range Count: 1 00:12:32.139 NGUID/EUI64 Never Reused: No 00:12:32.139 Namespace Write Protected: No 00:12:32.139 Number of LBA Formats: 1 00:12:32.139 Current LBA Format: LBA Format #00 00:12:32.139 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:32.139 00:12:32.139 12:01:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:32.399 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.399 [2024-06-10 12:01:21.818275] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:37.675 Initializing NVMe Controllers 00:12:37.675 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:37.675 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:37.675 Initialization complete. Launching workers. 
00:12:37.675 ======================================================== 00:12:37.675 Latency(us) 00:12:37.675 Device Information : IOPS MiB/s Average min max 00:12:37.675 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39958.02 156.09 3203.18 894.95 10687.89 00:12:37.675 ======================================================== 00:12:37.675 Total : 39958.02 156.09 3203.18 894.95 10687.89 00:12:37.675 00:12:37.675 [2024-06-10 12:01:26.839408] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:37.675 12:01:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:37.675 EAL: No free 2048 kB hugepages reported on node 1 00:12:37.675 [2024-06-10 12:01:27.054442] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:42.949 Initializing NVMe Controllers 00:12:42.949 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:42.949 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:42.949 Initialization complete. Launching workers. 
00:12:42.949 ======================================================== 00:12:42.949 Latency(us) 00:12:42.949 Device Information : IOPS MiB/s Average min max 00:12:42.949 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16039.05 62.65 7979.85 6985.80 8974.42 00:12:42.949 ======================================================== 00:12:42.949 Total : 16039.05 62.65 7979.85 6985.80 8974.42 00:12:42.949 00:12:42.949 [2024-06-10 12:01:32.085127] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:42.949 12:01:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:42.949 EAL: No free 2048 kB hugepages reported on node 1 00:12:42.949 [2024-06-10 12:01:32.308138] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:48.222 [2024-06-10 12:01:37.388794] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:48.222 Initializing NVMe Controllers 00:12:48.222 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:48.222 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:48.222 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:48.222 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:48.222 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:48.222 Initialization complete. Launching workers. 
00:12:48.222 Starting thread on core 2 00:12:48.222 Starting thread on core 3 00:12:48.222 Starting thread on core 1 00:12:48.222 12:01:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:48.222 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.222 [2024-06-10 12:01:37.686835] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:51.611 [2024-06-10 12:01:40.746689] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:51.611 Initializing NVMe Controllers 00:12:51.611 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:51.611 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:51.611 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:51.611 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:51.611 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:51.611 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:51.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:51.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:51.611 Initialization complete. Launching workers. 
00:12:51.611 Starting thread on core 1 with urgent priority queue 00:12:51.611 Starting thread on core 2 with urgent priority queue 00:12:51.611 Starting thread on core 3 with urgent priority queue 00:12:51.611 Starting thread on core 0 with urgent priority queue 00:12:51.611 SPDK bdev Controller (SPDK1 ) core 0: 9220.33 IO/s 10.85 secs/100000 ios 00:12:51.611 SPDK bdev Controller (SPDK1 ) core 1: 7760.00 IO/s 12.89 secs/100000 ios 00:12:51.611 SPDK bdev Controller (SPDK1 ) core 2: 7887.33 IO/s 12.68 secs/100000 ios 00:12:51.611 SPDK bdev Controller (SPDK1 ) core 3: 10009.00 IO/s 9.99 secs/100000 ios 00:12:51.611 ======================================================== 00:12:51.611 00:12:51.611 12:01:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:51.611 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.611 [2024-06-10 12:01:41.027883] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:51.611 Initializing NVMe Controllers 00:12:51.611 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:51.611 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:51.611 Namespace ID: 1 size: 0GB 00:12:51.611 Initialization complete. 00:12:51.611 INFO: using host memory buffer for IO 00:12:51.611 Hello world! 
00:12:51.611 [2024-06-10 12:01:41.064218] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:51.892 12:01:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:51.892 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.892 [2024-06-10 12:01:41.345226] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:53.268 Initializing NVMe Controllers 00:12:53.268 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:53.268 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:53.268 Initialization complete. Launching workers. 00:12:53.268 submit (in ns) avg, min, max = 7233.5, 3011.2, 3998518.4 00:12:53.268 complete (in ns) avg, min, max = 19312.5, 1684.0, 3998846.4 00:12:53.268 00:12:53.268 Submit histogram 00:12:53.268 ================ 00:12:53.268 Range in us Cumulative Count 00:12:53.268 3.008 - 3.021: 0.0058% ( 1) 00:12:53.268 3.059 - 3.072: 0.0232% ( 3) 00:12:53.268 3.072 - 3.085: 0.0579% ( 6) 00:12:53.268 3.085 - 3.098: 0.1680% ( 19) 00:12:53.268 3.098 - 3.110: 0.4866% ( 55) 00:12:53.268 3.110 - 3.123: 1.4019% ( 158) 00:12:53.268 3.123 - 3.136: 3.2673% ( 322) 00:12:53.268 3.136 - 3.149: 6.3840% ( 538) 00:12:53.268 3.149 - 3.162: 10.0626% ( 635) 00:12:53.268 3.162 - 3.174: 14.2683% ( 726) 00:12:53.268 3.174 - 3.187: 19.3141% ( 871) 00:12:53.268 3.187 - 3.200: 24.4757% ( 891) 00:12:53.268 3.200 - 3.213: 30.2051% ( 989) 00:12:53.268 3.213 - 3.226: 36.6470% ( 1112) 00:12:53.268 3.226 - 3.238: 42.6660% ( 1039) 00:12:53.268 3.238 - 3.251: 47.5959% ( 851) 00:12:53.268 3.251 - 3.264: 51.5120% ( 676) 00:12:53.268 3.264 - 3.277: 55.0574% ( 612) 00:12:53.268 3.277 - 3.302: 62.6463% ( 1310) 00:12:53.268 3.302 - 3.328: 70.0035% ( 1270) 
00:12:53.268 3.328 - 3.354: 76.6423% ( 1146) 00:12:53.268 3.354 - 3.379: 83.7678% ( 1230) 00:12:53.268 3.379 - 3.405: 86.2241% ( 424) 00:12:53.268 3.405 - 3.430: 87.5159% ( 223) 00:12:53.268 3.430 - 3.456: 88.4312% ( 158) 00:12:53.268 3.456 - 3.482: 89.5725% ( 197) 00:12:53.268 3.482 - 3.507: 91.0092% ( 248) 00:12:53.268 3.507 - 3.533: 92.8456% ( 317) 00:12:53.268 3.533 - 3.558: 94.3518% ( 260) 00:12:53.268 3.558 - 3.584: 95.6031% ( 216) 00:12:53.268 3.584 - 3.610: 96.7501% ( 198) 00:12:53.268 3.610 - 3.635: 97.9724% ( 211) 00:12:53.268 3.635 - 3.661: 98.6328% ( 114) 00:12:53.268 3.661 - 3.686: 99.1137% ( 83) 00:12:53.268 3.686 - 3.712: 99.3048% ( 33) 00:12:53.268 3.712 - 3.738: 99.4265% ( 21) 00:12:53.268 3.738 - 3.763: 99.4555% ( 5) 00:12:53.268 3.763 - 3.789: 99.5018% ( 8) 00:12:53.268 3.789 - 3.814: 99.5250% ( 4) 00:12:53.268 3.814 - 3.840: 99.5308% ( 1) 00:12:53.268 3.891 - 3.917: 99.5366% ( 1) 00:12:53.268 4.531 - 4.557: 99.5423% ( 1) 00:12:53.268 5.350 - 5.376: 99.5539% ( 2) 00:12:53.268 5.427 - 5.453: 99.5597% ( 1) 00:12:53.268 5.453 - 5.478: 99.5655% ( 1) 00:12:53.268 5.504 - 5.530: 99.5713% ( 1) 00:12:53.268 5.530 - 5.555: 99.5829% ( 2) 00:12:53.268 5.555 - 5.581: 99.5887% ( 1) 00:12:53.268 5.606 - 5.632: 99.5945% ( 1) 00:12:53.268 5.658 - 5.683: 99.6003% ( 1) 00:12:53.268 5.683 - 5.709: 99.6061% ( 1) 00:12:53.268 5.734 - 5.760: 99.6177% ( 2) 00:12:53.268 5.786 - 5.811: 99.6235% ( 1) 00:12:53.268 5.811 - 5.837: 99.6292% ( 1) 00:12:53.268 5.837 - 5.862: 99.6350% ( 1) 00:12:53.268 5.888 - 5.914: 99.6466% ( 2) 00:12:53.268 5.914 - 5.939: 99.6582% ( 2) 00:12:53.268 5.939 - 5.965: 99.6640% ( 1) 00:12:53.268 5.965 - 5.990: 99.6698% ( 1) 00:12:53.268 5.990 - 6.016: 99.6756% ( 1) 00:12:53.268 6.042 - 6.067: 99.6872% ( 2) 00:12:53.268 6.067 - 6.093: 99.6988% ( 2) 00:12:53.268 6.118 - 6.144: 99.7161% ( 3) 00:12:53.268 6.195 - 6.221: 99.7277% ( 2) 00:12:53.268 6.246 - 6.272: 99.7335% ( 1) 00:12:53.268 6.272 - 6.298: 99.7393% ( 1) 00:12:53.268 6.323 - 6.349: 99.7509% 
( 2) 00:12:53.268 6.349 - 6.374: 99.7567% ( 1) 00:12:53.268 6.374 - 6.400: 99.7683% ( 2) 00:12:53.268 6.528 - 6.554: 99.7741% ( 1) 00:12:53.268 6.554 - 6.605: 99.7799% ( 1) 00:12:53.268 6.605 - 6.656: 99.7857% ( 1) 00:12:53.268 6.656 - 6.707: 99.7914% ( 1) 00:12:53.268 6.707 - 6.758: 99.8088% ( 3) 00:12:53.268 6.758 - 6.810: 99.8204% ( 2) 00:12:53.268 6.810 - 6.861: 99.8262% ( 1) 00:12:53.268 6.912 - 6.963: 99.8320% ( 1) 00:12:53.268 6.963 - 7.014: 99.8378% ( 1) 00:12:53.268 7.066 - 7.117: 99.8494% ( 2) 00:12:53.268 [2024-06-10 12:01:42.366215] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:53.268 7.168 - 7.219: 99.8610% ( 2) 00:12:53.268 7.424 - 7.475: 99.8668% ( 1) 00:12:53.268 7.475 - 7.526: 99.8726% ( 1) 00:12:53.268 7.526 - 7.578: 99.8783% ( 1) 00:12:53.268 8.141 - 8.192: 99.8841% ( 1) 00:12:53.268 9.267 - 9.318: 99.8899% ( 1) 00:12:53.268 9.626 - 9.677: 99.8957% ( 1) 00:12:53.268 13.414 - 13.517: 99.9015% ( 1) 00:12:53.268 3984.589 - 4010.803: 100.0000% ( 17) 00:12:53.268 00:12:53.268 Complete histogram 00:12:53.268 ================== 00:12:53.268 Range in us Cumulative Count 00:12:53.268 1.677 - 1.690: 0.0058% ( 1) 00:12:53.268 1.702 - 1.715: 0.0290% ( 4) 00:12:53.268 1.715 - 1.728: 5.2601% ( 903) 00:12:53.268 1.728 - 1.741: 27.3781% ( 3818) 00:12:53.268 1.741 - 1.754: 34.9554% ( 1308) 00:12:53.268 1.754 - 1.766: 36.8961% ( 335) 00:12:53.268 1.766 - 1.779: 41.3973% ( 777) 00:12:53.268 1.779 - 1.792: 72.7262% ( 5408) 00:12:53.268 1.792 - 1.805: 91.8897% ( 3308) 00:12:53.268 1.805 - 1.818: 96.1302% ( 732) 00:12:53.268 1.818 - 1.830: 97.7407% ( 278) 00:12:53.268 1.830 - 1.843: 98.0304% ( 50) 00:12:53.268 1.843 - 1.856: 98.3026% ( 47) 00:12:53.268 1.856 - 1.869: 98.7024% ( 69) 00:12:53.268 1.869 - 1.882: 99.0673% ( 63) 00:12:53.268 1.882 - 1.894: 99.2179% ( 26) 00:12:53.268 1.894 - 1.907: 99.2469% ( 5) 00:12:53.268 1.907 - 1.920: 99.2701% ( 4) 00:12:53.268 1.920 - 1.933: 99.2759% ( 1) 00:12:53.268 
1.933 - 1.946: 99.2990% ( 4) 00:12:53.268 1.946 - 1.958: 99.3164% ( 3) 00:12:53.268 1.958 - 1.971: 99.3396% ( 4) 00:12:53.268 1.971 - 1.984: 99.3628% ( 4) 00:12:53.268 1.984 - 1.997: 99.3801% ( 3) 00:12:53.268 1.997 - 2.010: 99.3859% ( 1) 00:12:53.268 2.010 - 2.022: 99.3975% ( 2) 00:12:53.268 2.074 - 2.086: 99.4033% ( 1) 00:12:53.268 2.086 - 2.099: 99.4091% ( 1) 00:12:53.268 2.099 - 2.112: 99.4149% ( 1) 00:12:53.268 2.163 - 2.176: 99.4207% ( 1) 00:12:53.268 2.176 - 2.189: 99.4265% ( 1) 00:12:53.268 2.445 - 2.458: 99.4323% ( 1) 00:12:53.268 3.789 - 3.814: 99.4381% ( 1) 00:12:53.268 3.840 - 3.866: 99.4439% ( 1) 00:12:53.268 3.968 - 3.994: 99.4497% ( 1) 00:12:53.268 4.198 - 4.224: 99.4612% ( 2) 00:12:53.268 4.250 - 4.275: 99.4670% ( 1) 00:12:53.268 4.403 - 4.429: 99.4728% ( 1) 00:12:53.268 4.429 - 4.454: 99.4786% ( 1) 00:12:53.268 4.736 - 4.762: 99.4844% ( 1) 00:12:53.268 4.864 - 4.890: 99.4902% ( 1) 00:12:53.268 4.966 - 4.992: 99.4960% ( 1) 00:12:53.268 5.018 - 5.043: 99.5076% ( 2) 00:12:53.268 5.043 - 5.069: 99.5134% ( 1) 00:12:53.268 5.402 - 5.427: 99.5192% ( 1) 00:12:53.268 5.555 - 5.581: 99.5250% ( 1) 00:12:53.268 5.632 - 5.658: 99.5308% ( 1) 00:12:53.268 5.709 - 5.734: 99.5366% ( 1) 00:12:53.268 5.888 - 5.914: 99.5423% ( 1) 00:12:53.268 6.144 - 6.170: 99.5481% ( 1) 00:12:53.268 6.554 - 6.605: 99.5539% ( 1) 00:12:53.268 8.038 - 8.090: 99.5597% ( 1) 00:12:53.268 3106.406 - 3119.514: 99.5655% ( 1) 00:12:53.268 3984.589 - 4010.803: 100.0000% ( 75) 00:12:53.268 00:12:53.268 12:01:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:53.268 12:01:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:53.268 12:01:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:53.268 12:01:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local 
malloc_num=Malloc3 00:12:53.268 12:01:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:53.268 [ 00:12:53.268 { 00:12:53.268 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:53.268 "subtype": "Discovery", 00:12:53.268 "listen_addresses": [], 00:12:53.268 "allow_any_host": true, 00:12:53.268 "hosts": [] 00:12:53.268 }, 00:12:53.268 { 00:12:53.268 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:53.268 "subtype": "NVMe", 00:12:53.268 "listen_addresses": [ 00:12:53.268 { 00:12:53.268 "trtype": "VFIOUSER", 00:12:53.268 "adrfam": "IPv4", 00:12:53.268 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:53.268 "trsvcid": "0" 00:12:53.268 } 00:12:53.268 ], 00:12:53.268 "allow_any_host": true, 00:12:53.269 "hosts": [], 00:12:53.269 "serial_number": "SPDK1", 00:12:53.269 "model_number": "SPDK bdev Controller", 00:12:53.269 "max_namespaces": 32, 00:12:53.269 "min_cntlid": 1, 00:12:53.269 "max_cntlid": 65519, 00:12:53.269 "namespaces": [ 00:12:53.269 { 00:12:53.269 "nsid": 1, 00:12:53.269 "bdev_name": "Malloc1", 00:12:53.269 "name": "Malloc1", 00:12:53.269 "nguid": "11A09C8A03DC4A0B8714530861AF0A51", 00:12:53.269 "uuid": "11a09c8a-03dc-4a0b-8714-530861af0a51" 00:12:53.269 } 00:12:53.269 ] 00:12:53.269 }, 00:12:53.269 { 00:12:53.269 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:53.269 "subtype": "NVMe", 00:12:53.269 "listen_addresses": [ 00:12:53.269 { 00:12:53.269 "trtype": "VFIOUSER", 00:12:53.269 "adrfam": "IPv4", 00:12:53.269 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:53.269 "trsvcid": "0" 00:12:53.269 } 00:12:53.269 ], 00:12:53.269 "allow_any_host": true, 00:12:53.269 "hosts": [], 00:12:53.269 "serial_number": "SPDK2", 00:12:53.269 "model_number": "SPDK bdev Controller", 00:12:53.269 "max_namespaces": 32, 00:12:53.269 "min_cntlid": 1, 00:12:53.269 "max_cntlid": 65519, 00:12:53.269 "namespaces": [ 00:12:53.269 { 00:12:53.269 "nsid": 1, 00:12:53.269 
"bdev_name": "Malloc2", 00:12:53.269 "name": "Malloc2", 00:12:53.269 "nguid": "FA6D819008454D23B5C490F81F4D4C94", 00:12:53.269 "uuid": "fa6d8190-0845-4d23-b5c4-90f81f4d4c94" 00:12:53.269 } 00:12:53.269 ] 00:12:53.269 } 00:12:53.269 ] 00:12:53.269 12:01:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:53.269 12:01:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2140290 00:12:53.269 12:01:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:53.269 12:01:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:53.269 12:01:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # local i=0 00:12:53.269 12:01:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:53.269 12:01:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:53.269 12:01:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1275 -- # return 0 00:12:53.269 12:01:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:53.269 12:01:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:53.269 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.269 [2024-06-10 12:01:42.761865] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:53.269 Malloc3 00:12:53.527 12:01:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:53.528 [2024-06-10 12:01:42.958266] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:53.528 12:01:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:53.528 Asynchronous Event Request test 00:12:53.528 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:53.528 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:53.528 Registering asynchronous event callbacks... 00:12:53.528 Starting namespace attribute notice tests for all controllers... 00:12:53.528 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:53.528 aer_cb - Changed Namespace 00:12:53.528 Cleaning up... 
00:12:53.787 [ 00:12:53.787 { 00:12:53.787 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:53.787 "subtype": "Discovery", 00:12:53.787 "listen_addresses": [], 00:12:53.787 "allow_any_host": true, 00:12:53.787 "hosts": [] 00:12:53.787 }, 00:12:53.787 { 00:12:53.787 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:53.787 "subtype": "NVMe", 00:12:53.787 "listen_addresses": [ 00:12:53.787 { 00:12:53.787 "trtype": "VFIOUSER", 00:12:53.787 "adrfam": "IPv4", 00:12:53.787 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:53.787 "trsvcid": "0" 00:12:53.787 } 00:12:53.787 ], 00:12:53.787 "allow_any_host": true, 00:12:53.787 "hosts": [], 00:12:53.787 "serial_number": "SPDK1", 00:12:53.787 "model_number": "SPDK bdev Controller", 00:12:53.787 "max_namespaces": 32, 00:12:53.787 "min_cntlid": 1, 00:12:53.787 "max_cntlid": 65519, 00:12:53.787 "namespaces": [ 00:12:53.787 { 00:12:53.787 "nsid": 1, 00:12:53.787 "bdev_name": "Malloc1", 00:12:53.787 "name": "Malloc1", 00:12:53.787 "nguid": "11A09C8A03DC4A0B8714530861AF0A51", 00:12:53.787 "uuid": "11a09c8a-03dc-4a0b-8714-530861af0a51" 00:12:53.787 }, 00:12:53.787 { 00:12:53.787 "nsid": 2, 00:12:53.787 "bdev_name": "Malloc3", 00:12:53.787 "name": "Malloc3", 00:12:53.787 "nguid": "C3004BBCA4B84E33A5E1CE34E0420A99", 00:12:53.787 "uuid": "c3004bbc-a4b8-4e33-a5e1-ce34e0420a99" 00:12:53.787 } 00:12:53.787 ] 00:12:53.787 }, 00:12:53.787 { 00:12:53.787 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:53.787 "subtype": "NVMe", 00:12:53.787 "listen_addresses": [ 00:12:53.787 { 00:12:53.787 "trtype": "VFIOUSER", 00:12:53.787 "adrfam": "IPv4", 00:12:53.788 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:53.788 "trsvcid": "0" 00:12:53.788 } 00:12:53.788 ], 00:12:53.788 "allow_any_host": true, 00:12:53.788 "hosts": [], 00:12:53.788 "serial_number": "SPDK2", 00:12:53.788 "model_number": "SPDK bdev Controller", 00:12:53.788 "max_namespaces": 32, 00:12:53.788 "min_cntlid": 1, 00:12:53.788 "max_cntlid": 65519, 00:12:53.788 "namespaces": [ 
00:12:53.788 { 00:12:53.788 "nsid": 1, 00:12:53.788 "bdev_name": "Malloc2", 00:12:53.788 "name": "Malloc2", 00:12:53.788 "nguid": "FA6D819008454D23B5C490F81F4D4C94", 00:12:53.788 "uuid": "fa6d8190-0845-4d23-b5c4-90f81f4d4c94" 00:12:53.788 } 00:12:53.788 ] 00:12:53.788 } 00:12:53.788 ] 00:12:53.788 12:01:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2140290 00:12:53.788 12:01:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:53.788 12:01:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:53.788 12:01:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:53.788 12:01:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:53.788 [2024-06-10 12:01:43.186048] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:12:53.788 [2024-06-10 12:01:43.186085] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2140332 ]
00:12:53.788 EAL: No free 2048 kB hugepages reported on node 1
00:12:53.788 [2024-06-10 12:01:43.217689] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2
00:12:53.788 [2024-06-10 12:01:43.229080] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32
00:12:53.788 [2024-06-10 12:01:43.229101] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f1709f26000
00:12:53.788 [2024-06-10 12:01:43.230078] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:53.788 [2024-06-10 12:01:43.231080] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:53.788 [2024-06-10 12:01:43.232081] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:53.788 [2024-06-10 12:01:43.233089] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0
00:12:53.788 [2024-06-10 12:01:43.234099] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:12:53.788 [2024-06-10 12:01:43.235105] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:53.788 [2024-06-10 12:01:43.236108] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:12:53.788 [2024-06-10 12:01:43.237118] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:53.788 [2024-06-10 12:01:43.238121] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32
00:12:53.788 [2024-06-10 12:01:43.238135] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f1709f1b000
00:12:53.788 [2024-06-10 12:01:43.239024] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:12:53.788 [2024-06-10 12:01:43.250226] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully
00:12:53.788 [2024-06-10 12:01:43.250250] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout)
00:12:53.788 [2024-06-10 12:01:43.252309] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff
00:12:53.788 [2024-06-10 12:01:43.252345] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192
00:12:53.788 [2024-06-10 12:01:43.252410] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout)
00:12:53.788 [2024-06-10 12:01:43.252427] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout)
00:12:53.788 [2024-06-10 12:01:43.252434] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout)
00:12:53.788 [2024-06-10 12:01:43.253313] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300
00:12:53.788 [2024-06-10 12:01:43.253324] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout)
00:12:53.788 [2024-06-10 12:01:43.253333] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout)
00:12:53.788 [2024-06-10 12:01:43.254319] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff
00:12:53.788 [2024-06-10 12:01:43.254330] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout)
00:12:53.788 [2024-06-10 12:01:43.254339] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms)
00:12:53.788 [2024-06-10 12:01:43.255325] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0
00:12:53.788 [2024-06-10 12:01:43.255335] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:12:53.788 [2024-06-10 12:01:43.256337] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0
00:12:53.788 [2024-06-10 12:01:43.256347] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0
00:12:53.788 [2024-06-10 12:01:43.256353] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms)
00:12:53.788 [2024-06-10 12:01:43.256361] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:12:53.788 [2024-06-10 12:01:43.256468] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1
00:12:53.788 [2024-06-10 12:01:43.256474] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:12:53.788 [2024-06-10 12:01:43.256484] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000
00:12:53.788 [2024-06-10 12:01:43.257344] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000
00:12:53.788 [2024-06-10 12:01:43.258345] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff
00:12:53.788 [2024-06-10 12:01:43.259358] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001
00:12:53.788 [2024-06-10 12:01:43.260355] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:12:53.788 [2024-06-10 12:01:43.260396] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:12:53.788 [2024-06-10 12:01:43.261364] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1
00:12:53.788 [2024-06-10 12:01:43.261374] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:12:53.788 [2024-06-10 12:01:43.261380] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms)
00:12:53.788 [2024-06-10 12:01:43.261399] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout)
00:12:53.788 [2024-06-10 12:01:43.261408] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms)
00:12:53.788 [2024-06-10 12:01:43.261422] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:12:53.788 [2024-06-10 12:01:43.261429] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:12:53.788 [2024-06-10 12:01:43.261441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:12:53.788 [2024-06-10 12:01:43.267484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0
00:12:53.788 [2024-06-10 12:01:43.267496] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072
00:12:53.788 [2024-06-10 12:01:43.267502] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072
00:12:53.788 [2024-06-10 12:01:43.267508] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001
00:12:53.788 [2024-06-10 12:01:43.267517] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000
00:12:53.788 [2024-06-10 12:01:43.267525] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1
00:12:53.788 [2024-06-10 12:01:43.267531] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1
00:12:53.788 [2024-06-10 12:01:43.267537] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms)
00:12:53.788 [2024-06-10 12:01:43.267546] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms)
00:12:53.788 [2024-06-10 12:01:43.267556] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0
00:12:53.788 [2024-06-10 12:01:43.275482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0
00:12:53.788 [2024-06-10 12:01:43.275503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:12:53.788 [2024-06-10 12:01:43.275513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:12:53.788 [2024-06-10 12:01:43.275522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:12:53.788 [2024-06-10 12:01:43.275531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:12:53.788 [2024-06-10 12:01:43.275537] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms)
00:12:53.789 [2024-06-10 12:01:43.275548] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:12:53.789 [2024-06-10 12:01:43.275558] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0
00:12:53.789 [2024-06-10 12:01:43.283483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0
00:12:53.789 [2024-06-10 12:01:43.283493] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms
00:12:53.789 [2024-06-10 12:01:43.283499] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms)
00:12:53.789 [2024-06-10 12:01:43.283508] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms)
00:12:53.789 [2024-06-10 12:01:43.283514] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms)
00:12:53.789 [2024-06-10 12:01:43.283524] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:12:53.789 [2024-06-10 12:01:43.291483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0
00:12:53.789 [2024-06-10 12:01:43.291528] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms)
00:12:53.789 [2024-06-10 12:01:43.291538] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms)
00:12:53.789 [2024-06-10 12:01:43.291546] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096
00:12:53.789 [2024-06-10 12:01:43.291552] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000
00:12:53.789 [2024-06-10 12:01:43.291559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0
00:12:53.789 [2024-06-10 12:01:43.299481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0
00:12:53.789 [2024-06-10 12:01:43.299497] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added
00:12:53.789 [2024-06-10 12:01:43.299509] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms)
00:12:53.789 [2024-06-10 12:01:43.299518] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms)
00:12:53.789 [2024-06-10 12:01:43.299526] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:12:53.789 [2024-06-10 12:01:43.299532] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:12:53.789 [2024-06-10 12:01:43.299539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:12:54.049 [2024-06-10 12:01:43.307484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0
00:12:54.049 [2024-06-10 12:01:43.307500] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms)
00:12:54.049 [2024-06-10 12:01:43.307509] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:12:54.049 [2024-06-10 12:01:43.307517] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:12:54.049 [2024-06-10 12:01:43.307523] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:12:54.049 [2024-06-10 12:01:43.307531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:12:54.049 [2024-06-10 12:01:43.315481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0
00:12:54.049 [2024-06-10 12:01:43.315492] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms)
00:12:54.049 [2024-06-10 12:01:43.315500] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms)
00:12:54.049 [2024-06-10 12:01:43.315509] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms)
00:12:54.049 [2024-06-10 12:01:43.315516] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms)
00:12:54.049 [2024-06-10 12:01:43.315522] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms)
00:12:54.049 [2024-06-10 12:01:43.315529] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID
00:12:54.049 [2024-06-10 12:01:43.315535] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms)
00:12:54.049 [2024-06-10 12:01:43.315541] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout)
00:12:54.049 [2024-06-10 12:01:43.315560] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0
00:12:54.049 [2024-06-10 12:01:43.323483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0
00:12:54.049 [2024-06-10 12:01:43.323498] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0
00:12:54.049 [2024-06-10 12:01:43.331481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0
00:12:54.049 [2024-06-10 12:01:43.331496] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0
00:12:54.049 [2024-06-10 12:01:43.339483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0
00:12:54.049 [2024-06-10 12:01:43.339498] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:12:54.049 [2024-06-10 12:01:43.347480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0
00:12:54.049 [2024-06-10 12:01:43.347495] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192
00:12:54.049 [2024-06-10 12:01:43.347501] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000
00:12:54.049 [2024-06-10 12:01:43.347506] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000
00:12:54.050 [2024-06-10 12:01:43.347511] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000
00:12:54.050 [2024-06-10 12:01:43.347518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000
00:12:54.050 [2024-06-10 12:01:43.347526] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512
00:12:54.050 [2024-06-10 12:01:43.347531] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000
00:12:54.050 [2024-06-10 12:01:43.347538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0
00:12:54.050 [2024-06-10 12:01:43.347546] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512
00:12:54.050 [2024-06-10 12:01:43.347552] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:12:54.050 [2024-06-10 12:01:43.347558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:12:54.050 [2024-06-10 12:01:43.347566] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096
00:12:54.050 [2024-06-10 12:01:43.347572] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000
00:12:54.050 [2024-06-10 12:01:43.347579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0
00:12:54.050 [2024-06-10 12:01:43.355482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0
00:12:54.050 [2024-06-10 12:01:43.355499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0
00:12:54.050 [2024-06-10 12:01:43.355509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0
00:12:54.050 [2024-06-10 12:01:43.355520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0
00:12:54.050 =====================================================
00:12:54.050 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:12:54.050 =====================================================
00:12:54.050 Controller Capabilities/Features
00:12:54.050 ================================
00:12:54.050 Vendor ID: 4e58
00:12:54.050 Subsystem Vendor ID: 4e58
00:12:54.050 Serial Number: SPDK2
00:12:54.050 Model Number: SPDK bdev Controller
00:12:54.050 Firmware Version: 24.09
00:12:54.050 Recommended Arb Burst: 6
00:12:54.050 IEEE OUI Identifier: 8d 6b 50
00:12:54.050 Multi-path I/O
00:12:54.050 May have multiple subsystem ports: Yes
00:12:54.050 May have multiple controllers: Yes
00:12:54.050 Associated with SR-IOV VF: No
00:12:54.050 Max Data Transfer Size: 131072
00:12:54.050 Max Number of Namespaces: 32
00:12:54.050 Max Number of I/O Queues: 127
00:12:54.050 NVMe Specification Version (VS): 1.3
00:12:54.050 NVMe Specification Version (Identify): 1.3
00:12:54.050 Maximum Queue Entries: 256
00:12:54.050 Contiguous Queues Required: Yes
00:12:54.050 Arbitration Mechanisms Supported
00:12:54.050 Weighted Round Robin: Not Supported
00:12:54.050 Vendor Specific: Not Supported
00:12:54.050 Reset Timeout: 15000 ms
00:12:54.050 Doorbell Stride: 4 bytes
00:12:54.050 NVM Subsystem Reset: Not Supported
00:12:54.050 Command Sets Supported
00:12:54.050 NVM Command Set: Supported
00:12:54.050 Boot Partition: Not Supported
00:12:54.050 Memory Page Size Minimum: 4096 bytes
00:12:54.050 Memory Page Size Maximum: 4096 bytes
00:12:54.050 Persistent Memory Region: Not Supported
00:12:54.050 Optional Asynchronous Events Supported
00:12:54.050 Namespace Attribute Notices: Supported
00:12:54.050 Firmware Activation Notices: Not Supported
00:12:54.050 ANA Change Notices: Not Supported
00:12:54.050 PLE Aggregate Log Change Notices: Not Supported
00:12:54.050 LBA Status Info Alert Notices: Not Supported
00:12:54.050 EGE Aggregate Log Change Notices: Not Supported
00:12:54.050 Normal NVM Subsystem Shutdown event: Not Supported
00:12:54.050 Zone Descriptor Change Notices: Not Supported
00:12:54.050 Discovery Log Change Notices: Not Supported
00:12:54.050 Controller Attributes
00:12:54.050 128-bit Host Identifier: Supported
00:12:54.050 Non-Operational Permissive Mode: Not Supported
00:12:54.050 NVM Sets: Not Supported
00:12:54.050 Read Recovery Levels: Not Supported
00:12:54.050 Endurance Groups: Not Supported
00:12:54.050 Predictable Latency Mode: Not Supported
00:12:54.050 Traffic Based Keep ALive: Not Supported
00:12:54.050 Namespace Granularity: Not Supported
00:12:54.050 SQ Associations: Not Supported
00:12:54.050 UUID List: Not Supported
00:12:54.050 Multi-Domain Subsystem: Not Supported
00:12:54.050 Fixed Capacity Management: Not Supported
00:12:54.050 Variable Capacity Management: Not Supported
00:12:54.050 Delete Endurance Group: Not Supported
00:12:54.050 Delete NVM Set: Not Supported
00:12:54.050 Extended LBA Formats Supported: Not Supported
00:12:54.050 Flexible Data Placement Supported: Not Supported
00:12:54.050
00:12:54.050 Controller Memory Buffer Support
00:12:54.050 ================================
00:12:54.050 Supported: No
00:12:54.050
00:12:54.050 Persistent Memory Region Support
00:12:54.050 ================================
00:12:54.050 Supported: No
00:12:54.050
00:12:54.050 Admin Command Set Attributes
00:12:54.050 ============================
00:12:54.050 Security Send/Receive: Not Supported
00:12:54.050 Format NVM: Not Supported
00:12:54.050 Firmware Activate/Download: Not Supported
00:12:54.050 Namespace Management: Not Supported
00:12:54.050 Device Self-Test: Not Supported
00:12:54.050 Directives: Not Supported
00:12:54.050 NVMe-MI: Not Supported
00:12:54.050 Virtualization Management: Not Supported
00:12:54.050 Doorbell Buffer Config: Not Supported
00:12:54.050 Get LBA Status Capability: Not Supported
00:12:54.050 Command & Feature Lockdown Capability: Not Supported
00:12:54.050 Abort Command Limit: 4
00:12:54.050 Async Event Request Limit: 4
00:12:54.050 Number of Firmware Slots: N/A
00:12:54.050 Firmware Slot 1 Read-Only: N/A
00:12:54.050 Firmware Activation Without Reset: N/A
00:12:54.050 Multiple Update Detection Support: N/A
00:12:54.050 Firmware Update Granularity: No Information Provided
00:12:54.050 Per-Namespace SMART Log: No
00:12:54.050 Asymmetric Namespace Access Log Page: Not Supported
00:12:54.050 Subsystem NQN: nqn.2019-07.io.spdk:cnode2
00:12:54.050 Command Effects Log Page: Supported
00:12:54.050 Get Log Page Extended Data: Supported
00:12:54.050 Telemetry Log Pages: Not Supported
00:12:54.050 Persistent Event Log Pages: Not Supported
00:12:54.050 Supported Log Pages Log Page: May Support
00:12:54.050 Commands Supported & Effects Log Page: Not Supported
00:12:54.050 Feature Identifiers & Effects Log Page:May Support
00:12:54.050 NVMe-MI Commands & Effects Log Page: May Support
00:12:54.050 Data Area 4 for Telemetry Log: Not Supported
00:12:54.050 Error Log Page Entries Supported: 128
00:12:54.050 Keep Alive: Supported
00:12:54.050 Keep Alive Granularity: 10000 ms
00:12:54.050
00:12:54.050 NVM Command Set Attributes
00:12:54.050 ==========================
00:12:54.050 Submission Queue Entry Size
00:12:54.050 Max: 64
00:12:54.050 Min: 64
00:12:54.050 Completion Queue Entry Size
00:12:54.050 Max: 16
00:12:54.050 Min: 16
00:12:54.050 Number of Namespaces: 32
00:12:54.050 Compare Command: Supported
00:12:54.050 Write Uncorrectable Command: Not Supported
00:12:54.050 Dataset Management Command: Supported
00:12:54.050 Write Zeroes Command: Supported
00:12:54.050 Set Features Save Field: Not Supported
00:12:54.050 Reservations: Not Supported
00:12:54.050 Timestamp: Not Supported
00:12:54.050 Copy: Supported
00:12:54.050 Volatile Write Cache: Present
00:12:54.050 Atomic Write Unit (Normal): 1
00:12:54.050 Atomic Write Unit (PFail): 1
00:12:54.050 Atomic Compare & Write Unit: 1
00:12:54.050 Fused Compare & Write: Supported
00:12:54.050 Scatter-Gather List
00:12:54.050 SGL Command Set: Supported (Dword aligned)
00:12:54.050 SGL Keyed: Not Supported
00:12:54.050 SGL Bit Bucket Descriptor: Not Supported
00:12:54.050 SGL Metadata Pointer: Not Supported
00:12:54.050 Oversized SGL: Not Supported
00:12:54.050 SGL Metadata Address: Not Supported
00:12:54.050 SGL Offset: Not Supported
00:12:54.050 Transport SGL Data Block: Not Supported
00:12:54.050 Replay Protected Memory Block: Not Supported
00:12:54.050
00:12:54.050 Firmware Slot Information
00:12:54.050 =========================
00:12:54.050 Active slot: 1
00:12:54.050 Slot 1 Firmware Revision: 24.09
00:12:54.050
00:12:54.050
00:12:54.050 Commands Supported and Effects
00:12:54.050 ==============================
00:12:54.050 Admin Commands
00:12:54.050 --------------
00:12:54.050 Get Log Page (02h): Supported
00:12:54.050 Identify (06h): Supported
00:12:54.050 Abort (08h): Supported
00:12:54.050 Set Features (09h): Supported
00:12:54.050 Get Features (0Ah): Supported
00:12:54.050 Asynchronous Event Request (0Ch): Supported
00:12:54.050 Keep Alive (18h): Supported
00:12:54.050 I/O Commands
00:12:54.050 ------------
00:12:54.050 Flush (00h): Supported LBA-Change
00:12:54.050 Write (01h): Supported LBA-Change
00:12:54.050 Read (02h): Supported
00:12:54.050 Compare (05h): Supported
00:12:54.051 Write Zeroes (08h): Supported LBA-Change
00:12:54.051 Dataset Management (09h): Supported LBA-Change
00:12:54.051 Copy (19h): Supported LBA-Change
00:12:54.051 Unknown (79h): Supported LBA-Change
00:12:54.051 Unknown (7Ah): Supported
00:12:54.051
00:12:54.051 Error Log
00:12:54.051 =========
00:12:54.051
00:12:54.051 Arbitration
00:12:54.051 ===========
00:12:54.051 Arbitration Burst: 1
00:12:54.051
00:12:54.051 Power Management
00:12:54.051 ================
00:12:54.051 Number of Power States: 1
00:12:54.051 Current Power State: Power State #0
00:12:54.051 Power State #0:
00:12:54.051 Max Power: 0.00 W
00:12:54.051 Non-Operational State: Operational
00:12:54.051 Entry Latency: Not Reported
00:12:54.051 Exit Latency: Not Reported
00:12:54.051 Relative Read Throughput: 0
00:12:54.051 Relative Read Latency: 0
00:12:54.051 Relative Write Throughput: 0
00:12:54.051 Relative Write Latency: 0
00:12:54.051 Idle Power: Not Reported
00:12:54.051 Active Power: Not Reported
00:12:54.051 Non-Operational Permissive Mode: Not Supported
00:12:54.051
00:12:54.051 Health Information
00:12:54.051 ==================
00:12:54.051 Critical Warnings:
00:12:54.051 Available Spare Space: OK
00:12:54.051 Temperature: OK
00:12:54.051 Device Reliability: OK
00:12:54.051 Read Only: No
00:12:54.051 Volatile Memory Backup: OK
00:12:54.051 Current Temperature: 0 Kelvin (-2[2024-06-10 12:01:43.355612] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:12:54.051 [2024-06-10 12:01:43.363481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:12:54.051 [2024-06-10 12:01:43.363510] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD
00:12:54.051 [2024-06-10 12:01:43.363520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:54.051 [2024-06-10 12:01:43.363528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:54.051 [2024-06-10 12:01:43.363537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:54.051 [2024-06-10 12:01:43.363545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:54.051 [2024-06-10 12:01:43.363599] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001
00:12:54.051 [2024-06-10 12:01:43.363611] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001
00:12:54.051 [2024-06-10 12:01:43.364606] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:12:54.051 [2024-06-10 12:01:43.364650] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us
00:12:54.051 [2024-06-10 12:01:43.364658] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms
00:12:54.051 [2024-06-10 12:01:43.365607] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9
00:12:54.051 [2024-06-10 12:01:43.365620] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds
00:12:54.051 [2024-06-10 12:01:43.365666] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl
00:12:54.051 [2024-06-10 12:01:43.368481] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:12:54.051 73 Celsius)
00:12:54.051 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:12:54.051 Available Spare: 0%
00:12:54.051 Available Spare Threshold: 0%
00:12:54.051 Life Percentage Used: 0%
00:12:54.051 Data Units Read: 0
00:12:54.051 Data Units Written: 0
00:12:54.051 Host Read Commands: 0
00:12:54.051 Host Write Commands: 0
00:12:54.051 Controller Busy Time: 0 minutes
00:12:54.051 Power Cycles: 0
00:12:54.051 Power On Hours: 0 hours
00:12:54.051 Unsafe Shutdowns: 0
00:12:54.051 Unrecoverable Media Errors: 0
00:12:54.051 Lifetime Error Log Entries: 0
00:12:54.051 Warning Temperature Time: 0 minutes
00:12:54.051 Critical Temperature Time: 0 minutes
00:12:54.051
00:12:54.051 Number of Queues
00:12:54.051 ================
00:12:54.051 Number of I/O Submission Queues: 127
00:12:54.051 Number of I/O Completion Queues: 127
00:12:54.051
00:12:54.051 Active Namespaces
00:12:54.051 =================
00:12:54.051 Namespace ID:1
00:12:54.051 Error Recovery Timeout: Unlimited
00:12:54.051 Command Set Identifier: NVM (00h)
00:12:54.051 Deallocate: Supported
00:12:54.051 Deallocated/Unwritten Error: Not Supported
00:12:54.051 Deallocated Read Value: Unknown
00:12:54.051 Deallocate in Write Zeroes: Not Supported
00:12:54.051 Deallocated Guard Field: 0xFFFF
00:12:54.051 Flush: Supported
00:12:54.051 Reservation: Supported
00:12:54.051 Namespace Sharing Capabilities: Multiple Controllers
00:12:54.051 Size (in LBAs): 131072 (0GiB)
00:12:54.051 Capacity (in LBAs): 131072 (0GiB)
00:12:54.051 Utilization (in LBAs): 131072 (0GiB)
00:12:54.051 NGUID: FA6D819008454D23B5C490F81F4D4C94
00:12:54.051 UUID: fa6d8190-0845-4d23-b5c4-90f81f4d4c94
00:12:54.051 Thin Provisioning: Not Supported
00:12:54.051 Per-NS Atomic Units: Yes
00:12:54.051 Atomic Boundary Size (Normal): 0
00:12:54.051 Atomic Boundary Size (PFail): 0
00:12:54.051 Atomic Boundary Offset: 0
00:12:54.051 Maximum Single Source Range Length: 65535
00:12:54.051 Maximum Copy Length: 65535
00:12:54.051 Maximum Source Range Count: 1
00:12:54.051 NGUID/EUI64 Never Reused: No
00:12:54.051 Namespace Write Protected: No
00:12:54.051 Number of LBA Formats: 1
00:12:54.051 Current LBA Format: LBA Format #00
00:12:54.051 LBA Format #00: Data Size: 512 Metadata Size: 0
00:12:54.051
00:12:54.051 12:01:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
00:12:54.051 EAL: No free 2048 kB hugepages reported on node 1
00:12:54.051 [2024-06-10 12:01:43.565438] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:12:59.323 Initializing NVMe Controllers
00:12:59.323 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:12:59.323 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1
00:12:59.323 Initialization complete. Launching workers.
00:12:59.323 ========================================================
00:12:59.323 Latency(us)
00:12:59.323 Device Information : IOPS MiB/s Average min max
00:12:59.323 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39990.93 156.21 3201.10 916.98 6703.83
00:12:59.323 ========================================================
00:12:59.323 Total : 39990.93 156.21 3201.10 916.98 6703.83
00:12:59.323
00:12:59.323 [2024-06-10 12:01:48.668736] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:12:59.323 12:01:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
00:12:59.323 EAL: No free 2048 kB hugepages reported on node 1
00:12:59.582 [2024-06-10 12:01:48.884363] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:13:04.852 Initializing NVMe Controllers
00:13:04.853 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:13:04.853 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1
00:13:04.853 Initialization complete. Launching workers.
00:13:04.853 ========================================================
00:13:04.853 Latency(us)
00:13:04.853 Device Information : IOPS MiB/s Average min max
00:13:04.853 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39949.12 156.05 3204.22 929.82 10659.77
00:13:04.853 ========================================================
00:13:04.853 Total : 39949.12 156.05 3204.22 929.82 10659.77
00:13:04.853
00:13:04.853 [2024-06-10 12:01:53.909116] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:13:04.853 12:01:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
00:13:04.853 EAL: No free 2048 kB hugepages reported on node 1
00:13:04.853 [2024-06-10 12:01:54.120157] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:13:10.128 [2024-06-10 12:01:59.270575] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:13:10.128 Initializing NVMe Controllers
00:13:10.128 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:13:10.128 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:13:10.128 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1
00:13:10.128 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2
00:13:10.128 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3
00:13:10.128 Initialization complete. Launching workers.
00:13:10.128 Starting thread on core 2
00:13:10.128 Starting thread on core 3
00:13:10.128 Starting thread on core 1
00:13:10.128 12:01:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g
00:13:10.128 EAL: No free 2048 kB hugepages reported on node 1
00:13:10.128 [2024-06-10 12:01:59.573894] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:13:13.421 [2024-06-10 12:02:02.662925] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:13:13.421 Initializing NVMe Controllers
00:13:13.421 Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:13:13.421 Attached to /var/run/vfio-user/domain/vfio-user2/2
00:13:13.421 Associating SPDK bdev Controller (SPDK2 ) with lcore 0
00:13:13.421 Associating SPDK bdev Controller (SPDK2 ) with lcore 1
00:13:13.421 Associating SPDK bdev Controller (SPDK2 ) with lcore 2
00:13:13.421 Associating SPDK bdev Controller (SPDK2 ) with lcore 3
00:13:13.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration:
00:13:13.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1
00:13:13.421 Initialization complete. Launching workers.
00:13:13.421 Starting thread on core 1 with urgent priority queue 00:13:13.421 Starting thread on core 2 with urgent priority queue 00:13:13.421 Starting thread on core 3 with urgent priority queue 00:13:13.421 Starting thread on core 0 with urgent priority queue 00:13:13.421 SPDK bdev Controller (SPDK2 ) core 0: 8317.33 IO/s 12.02 secs/100000 ios 00:13:13.421 SPDK bdev Controller (SPDK2 ) core 1: 7671.33 IO/s 13.04 secs/100000 ios 00:13:13.421 SPDK bdev Controller (SPDK2 ) core 2: 7656.33 IO/s 13.06 secs/100000 ios 00:13:13.421 SPDK bdev Controller (SPDK2 ) core 3: 9311.33 IO/s 10.74 secs/100000 ios 00:13:13.421 ======================================================== 00:13:13.421 00:13:13.421 12:02:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:13.421 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.680 [2024-06-10 12:02:02.947598] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:13.680 Initializing NVMe Controllers 00:13:13.680 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:13.681 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:13.681 Namespace ID: 1 size: 0GB 00:13:13.681 Initialization complete. 00:13:13.681 INFO: using host memory buffer for IO 00:13:13.681 Hello world! 
00:13:13.681 [2024-06-10 12:02:02.959668] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:13.681 12:02:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:13.681 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.940 [2024-06-10 12:02:03.239258] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:14.877 Initializing NVMe Controllers 00:13:14.877 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:14.877 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:14.877 Initialization complete. Launching workers. 00:13:14.877 submit (in ns) avg, min, max = 6081.4, 3084.8, 3999772.8 00:13:14.877 complete (in ns) avg, min, max = 19169.2, 1716.0, 7986868.8 00:13:14.877 00:13:14.877 Submit histogram 00:13:14.877 ================ 00:13:14.877 Range in us Cumulative Count 00:13:14.877 3.085 - 3.098: 0.0464% ( 8) 00:13:14.877 3.098 - 3.110: 0.1740% ( 22) 00:13:14.877 3.110 - 3.123: 0.3075% ( 23) 00:13:14.877 3.123 - 3.136: 0.6440% ( 58) 00:13:14.877 3.136 - 3.149: 1.3866% ( 128) 00:13:14.877 3.149 - 3.162: 2.7789% ( 240) 00:13:14.877 3.162 - 3.174: 4.7572% ( 341) 00:13:14.877 3.174 - 3.187: 7.7044% ( 508) 00:13:14.877 3.187 - 3.200: 11.7306% ( 694) 00:13:14.877 3.200 - 3.213: 16.5400% ( 829) 00:13:14.877 3.213 - 3.226: 21.8309% ( 912) 00:13:14.877 3.226 - 3.238: 27.0813% ( 905) 00:13:14.877 3.238 - 3.251: 32.5637% ( 945) 00:13:14.877 3.251 - 3.264: 37.8024% ( 903) 00:13:14.877 3.264 - 3.277: 43.5923% ( 998) 00:13:14.877 3.277 - 3.302: 54.1510% ( 1820) 00:13:14.877 3.302 - 3.328: 62.5805% ( 1453) 00:13:14.877 3.328 - 3.354: 69.8904% ( 1260) 00:13:14.877 3.354 - 3.379: 76.6549% ( 1166) 00:13:14.877 3.379 - 3.405: 82.0154% ( 924) 
00:13:14.877 3.405 - 3.430: 86.8655% ( 836) 00:13:14.877 3.430 - 3.456: 88.3332% ( 253) 00:13:14.877 3.456 - 3.482: 88.9772% ( 111) 00:13:14.877 3.482 - 3.507: 89.7720% ( 137) 00:13:14.877 3.507 - 3.533: 91.0599% ( 222) 00:13:14.877 3.533 - 3.558: 92.4813% ( 245) 00:13:14.877 3.558 - 3.584: 94.1173% ( 282) 00:13:14.877 3.584 - 3.610: 95.4981% ( 238) 00:13:14.877 3.610 - 3.635: 96.7860% ( 222) 00:13:14.877 3.635 - 3.661: 97.8767% ( 188) 00:13:14.877 3.661 - 3.686: 98.7585% ( 152) 00:13:14.877 3.686 - 3.712: 99.1646% ( 70) 00:13:14.877 3.712 - 3.738: 99.4082% ( 42) 00:13:14.877 3.738 - 3.763: 99.5475% ( 24) 00:13:14.877 3.763 - 3.789: 99.6287% ( 14) 00:13:14.877 3.789 - 3.814: 99.6461% ( 3) 00:13:14.877 3.814 - 3.840: 99.6519% ( 1) 00:13:14.877 3.866 - 3.891: 99.6577% ( 1) 00:13:14.877 3.917 - 3.942: 99.6635% ( 1) 00:13:14.877 4.147 - 4.173: 99.6693% ( 1) 00:13:14.877 5.325 - 5.350: 99.6751% ( 1) 00:13:14.877 5.658 - 5.683: 99.6809% ( 1) 00:13:14.877 5.683 - 5.709: 99.6867% ( 1) 00:13:14.877 5.837 - 5.862: 99.6983% ( 2) 00:13:14.877 5.862 - 5.888: 99.7041% ( 1) 00:13:14.877 5.990 - 6.016: 99.7099% ( 1) 00:13:14.877 6.016 - 6.042: 99.7157% ( 1) 00:13:14.877 6.067 - 6.093: 99.7273% ( 2) 00:13:14.877 6.093 - 6.118: 99.7331% ( 1) 00:13:14.877 6.144 - 6.170: 99.7505% ( 3) 00:13:14.877 6.170 - 6.195: 99.7621% ( 2) 00:13:14.877 6.195 - 6.221: 99.7679% ( 1) 00:13:14.877 6.246 - 6.272: 99.7737% ( 1) 00:13:14.877 6.272 - 6.298: 99.7795% ( 1) 00:13:14.877 6.298 - 6.323: 99.7911% ( 2) 00:13:14.877 6.349 - 6.374: 99.7969% ( 1) 00:13:14.877 6.477 - 6.502: 99.8086% ( 2) 00:13:14.877 6.528 - 6.554: 99.8144% ( 1) 00:13:14.877 6.554 - 6.605: 99.8202% ( 1) 00:13:14.877 6.605 - 6.656: 99.8318% ( 2) 00:13:14.877 6.656 - 6.707: 99.8434% ( 2) 00:13:14.877 6.707 - 6.758: 99.8550% ( 2) 00:13:14.877 6.810 - 6.861: 99.8608% ( 1) 00:13:14.877 6.912 - 6.963: 99.8840% ( 4) 00:13:14.877 7.117 - 7.168: 99.8956% ( 2) 00:13:14.877 7.322 - 7.373: 99.9014% ( 1) 00:13:14.877 7.424 - 7.475: 99.9072% ( 1) 
00:13:14.877 8.141 - 8.192: 99.9130% ( 1) 00:13:14.877 9.267 - 9.318: 99.9188% ( 1) 00:13:14.877 11.878 - 11.930: 99.9246% ( 1) 00:13:14.877 16.077 - 16.179: 99.9304% ( 1) 00:13:14.877 3512.730 - 3538.944: 99.9362% ( 1) 00:13:14.877 3984.589 - 4010.803: 100.0000% ( 11) 00:13:14.877 00:13:14.877 Complete histogram 00:13:14.877 ================== 00:13:14.877 Range in us Cumulative Count 00:13:14.877 1.715 - 1.728: 0.9863% ( 170) 00:13:14.877 1.728 - 1.741: 7.9364% ( 1198) 00:13:14.877 1.741 - 1.754: 11.7132% ( 651) 00:13:14.877 1.754 - 1.766: 13.3260% ( 278) 00:13:14.877 1.766 - [2024-06-10 12:02:04.337322] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:14.877 1.779: 29.9356% ( 2863) 00:13:14.877 1.779 - 1.792: 74.6186% ( 7702) 00:13:14.877 1.792 - 1.805: 90.0447% ( 2659) 00:13:14.877 1.805 - 1.818: 95.4632% ( 934) 00:13:14.877 1.818 - 1.830: 97.3777% ( 330) 00:13:14.877 1.830 - 1.843: 97.7954% ( 72) 00:13:14.877 1.843 - 1.856: 98.3002% ( 87) 00:13:14.877 1.856 - 1.869: 98.8165% ( 89) 00:13:14.877 1.869 - 1.882: 99.0544% ( 41) 00:13:14.877 1.882 - 1.894: 99.1704% ( 20) 00:13:14.878 1.894 - 1.907: 99.1994% ( 5) 00:13:14.878 1.907 - 1.920: 99.2110% ( 2) 00:13:14.878 1.920 - 1.933: 99.2458% ( 6) 00:13:14.878 1.933 - 1.946: 99.2632% ( 3) 00:13:14.878 1.946 - 1.958: 99.2806% ( 3) 00:13:14.878 1.958 - 1.971: 99.2980% ( 3) 00:13:14.878 1.971 - 1.984: 99.3096% ( 2) 00:13:14.878 1.984 - 1.997: 99.3212% ( 2) 00:13:14.878 1.997 - 2.010: 99.3444% ( 4) 00:13:14.878 2.010 - 2.022: 99.3502% ( 1) 00:13:14.878 2.035 - 2.048: 99.3560% ( 1) 00:13:14.878 2.048 - 2.061: 99.3676% ( 2) 00:13:14.878 2.074 - 2.086: 99.3734% ( 1) 00:13:14.878 2.406 - 2.419: 99.3792% ( 1) 00:13:14.878 3.814 - 3.840: 99.3850% ( 1) 00:13:14.878 3.840 - 3.866: 99.3908% ( 1) 00:13:14.878 4.070 - 4.096: 99.4024% ( 2) 00:13:14.878 4.147 - 4.173: 99.4082% ( 1) 00:13:14.878 4.173 - 4.198: 99.4141% ( 1) 00:13:14.878 4.301 - 4.326: 99.4199% ( 1) 00:13:14.878 
4.454 - 4.480: 99.4257% ( 1) 00:13:14.878 4.506 - 4.531: 99.4315% ( 1) 00:13:14.878 4.685 - 4.710: 99.4373% ( 1) 00:13:14.878 4.762 - 4.787: 99.4431% ( 1) 00:13:14.878 4.787 - 4.813: 99.4489% ( 1) 00:13:14.878 4.838 - 4.864: 99.4547% ( 1) 00:13:14.878 4.966 - 4.992: 99.4605% ( 1) 00:13:14.878 5.248 - 5.274: 99.4663% ( 1) 00:13:14.878 5.350 - 5.376: 99.4779% ( 2) 00:13:14.878 5.376 - 5.402: 99.4895% ( 2) 00:13:14.878 5.402 - 5.427: 99.4953% ( 1) 00:13:14.878 5.478 - 5.504: 99.5011% ( 1) 00:13:14.878 5.555 - 5.581: 99.5069% ( 1) 00:13:14.878 5.606 - 5.632: 99.5127% ( 1) 00:13:14.878 5.760 - 5.786: 99.5185% ( 1) 00:13:14.878 5.786 - 5.811: 99.5243% ( 1) 00:13:14.878 5.811 - 5.837: 99.5359% ( 2) 00:13:14.878 5.862 - 5.888: 99.5417% ( 1) 00:13:14.878 6.298 - 6.323: 99.5475% ( 1) 00:13:14.878 6.810 - 6.861: 99.5533% ( 1) 00:13:14.878 8.448 - 8.499: 99.5591% ( 1) 00:13:14.878 10.752 - 10.803: 99.5649% ( 1) 00:13:14.878 937.165 - 943.718: 99.5707% ( 1) 00:13:14.878 1028.915 - 1035.469: 99.5765% ( 1) 00:13:14.878 2005.402 - 2018.509: 99.5823% ( 1) 00:13:14.878 2988.442 - 3001.549: 99.5881% ( 1) 00:13:14.878 3984.589 - 4010.803: 99.9826% ( 68) 00:13:14.878 5976.883 - 6003.098: 99.9884% ( 1) 00:13:14.878 6973.030 - 7025.459: 99.9942% ( 1) 00:13:14.878 7969.178 - 8021.606: 100.0000% ( 1) 00:13:14.878 00:13:14.878 12:02:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:14.878 12:02:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:14.878 12:02:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:14.878 12:02:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:14.878 12:02:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_get_subsystems 00:13:15.138 [ 00:13:15.138 { 00:13:15.138 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:15.138 "subtype": "Discovery", 00:13:15.138 "listen_addresses": [], 00:13:15.138 "allow_any_host": true, 00:13:15.138 "hosts": [] 00:13:15.138 }, 00:13:15.138 { 00:13:15.138 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:15.138 "subtype": "NVMe", 00:13:15.138 "listen_addresses": [ 00:13:15.138 { 00:13:15.138 "trtype": "VFIOUSER", 00:13:15.138 "adrfam": "IPv4", 00:13:15.138 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:15.138 "trsvcid": "0" 00:13:15.138 } 00:13:15.138 ], 00:13:15.138 "allow_any_host": true, 00:13:15.138 "hosts": [], 00:13:15.138 "serial_number": "SPDK1", 00:13:15.138 "model_number": "SPDK bdev Controller", 00:13:15.138 "max_namespaces": 32, 00:13:15.138 "min_cntlid": 1, 00:13:15.138 "max_cntlid": 65519, 00:13:15.138 "namespaces": [ 00:13:15.138 { 00:13:15.138 "nsid": 1, 00:13:15.138 "bdev_name": "Malloc1", 00:13:15.138 "name": "Malloc1", 00:13:15.138 "nguid": "11A09C8A03DC4A0B8714530861AF0A51", 00:13:15.138 "uuid": "11a09c8a-03dc-4a0b-8714-530861af0a51" 00:13:15.138 }, 00:13:15.138 { 00:13:15.138 "nsid": 2, 00:13:15.138 "bdev_name": "Malloc3", 00:13:15.138 "name": "Malloc3", 00:13:15.138 "nguid": "C3004BBCA4B84E33A5E1CE34E0420A99", 00:13:15.138 "uuid": "c3004bbc-a4b8-4e33-a5e1-ce34e0420a99" 00:13:15.138 } 00:13:15.138 ] 00:13:15.138 }, 00:13:15.138 { 00:13:15.138 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:15.138 "subtype": "NVMe", 00:13:15.138 "listen_addresses": [ 00:13:15.138 { 00:13:15.138 "trtype": "VFIOUSER", 00:13:15.138 "adrfam": "IPv4", 00:13:15.138 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:15.138 "trsvcid": "0" 00:13:15.138 } 00:13:15.138 ], 00:13:15.138 "allow_any_host": true, 00:13:15.138 "hosts": [], 00:13:15.138 "serial_number": "SPDK2", 00:13:15.138 "model_number": "SPDK bdev Controller", 00:13:15.138 "max_namespaces": 32, 00:13:15.138 "min_cntlid": 1, 00:13:15.138 "max_cntlid": 65519, 00:13:15.138 
"namespaces": [ 00:13:15.138 { 00:13:15.138 "nsid": 1, 00:13:15.138 "bdev_name": "Malloc2", 00:13:15.138 "name": "Malloc2", 00:13:15.138 "nguid": "FA6D819008454D23B5C490F81F4D4C94", 00:13:15.138 "uuid": "fa6d8190-0845-4d23-b5c4-90f81f4d4c94" 00:13:15.138 } 00:13:15.138 ] 00:13:15.138 } 00:13:15.138 ] 00:13:15.138 12:02:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:15.138 12:02:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2144034 00:13:15.138 12:02:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:15.138 12:02:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:15.138 12:02:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # local i=0 00:13:15.138 12:02:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:15.138 12:02:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:15.138 12:02:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1275 -- # return 0 00:13:15.138 12:02:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:15.138 12:02:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:15.138 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.398 [2024-06-10 12:02:04.735879] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:15.398 Malloc4 00:13:15.398 12:02:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:15.657 [2024-06-10 12:02:04.930413] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:15.657 12:02:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:15.657 Asynchronous Event Request test 00:13:15.657 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:15.657 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:15.657 Registering asynchronous event callbacks... 00:13:15.657 Starting namespace attribute notice tests for all controllers... 00:13:15.657 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:15.657 aer_cb - Changed Namespace 00:13:15.657 Cleaning up... 
00:13:15.657 [ 00:13:15.657 { 00:13:15.657 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:15.657 "subtype": "Discovery", 00:13:15.657 "listen_addresses": [], 00:13:15.657 "allow_any_host": true, 00:13:15.657 "hosts": [] 00:13:15.657 }, 00:13:15.657 { 00:13:15.657 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:15.657 "subtype": "NVMe", 00:13:15.657 "listen_addresses": [ 00:13:15.657 { 00:13:15.657 "trtype": "VFIOUSER", 00:13:15.657 "adrfam": "IPv4", 00:13:15.657 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:15.657 "trsvcid": "0" 00:13:15.657 } 00:13:15.657 ], 00:13:15.657 "allow_any_host": true, 00:13:15.657 "hosts": [], 00:13:15.657 "serial_number": "SPDK1", 00:13:15.657 "model_number": "SPDK bdev Controller", 00:13:15.657 "max_namespaces": 32, 00:13:15.657 "min_cntlid": 1, 00:13:15.657 "max_cntlid": 65519, 00:13:15.657 "namespaces": [ 00:13:15.657 { 00:13:15.657 "nsid": 1, 00:13:15.657 "bdev_name": "Malloc1", 00:13:15.657 "name": "Malloc1", 00:13:15.657 "nguid": "11A09C8A03DC4A0B8714530861AF0A51", 00:13:15.657 "uuid": "11a09c8a-03dc-4a0b-8714-530861af0a51" 00:13:15.657 }, 00:13:15.657 { 00:13:15.657 "nsid": 2, 00:13:15.657 "bdev_name": "Malloc3", 00:13:15.657 "name": "Malloc3", 00:13:15.657 "nguid": "C3004BBCA4B84E33A5E1CE34E0420A99", 00:13:15.657 "uuid": "c3004bbc-a4b8-4e33-a5e1-ce34e0420a99" 00:13:15.657 } 00:13:15.657 ] 00:13:15.657 }, 00:13:15.657 { 00:13:15.657 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:15.657 "subtype": "NVMe", 00:13:15.657 "listen_addresses": [ 00:13:15.657 { 00:13:15.657 "trtype": "VFIOUSER", 00:13:15.657 "adrfam": "IPv4", 00:13:15.657 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:15.657 "trsvcid": "0" 00:13:15.657 } 00:13:15.657 ], 00:13:15.657 "allow_any_host": true, 00:13:15.657 "hosts": [], 00:13:15.657 "serial_number": "SPDK2", 00:13:15.657 "model_number": "SPDK bdev Controller", 00:13:15.657 "max_namespaces": 32, 00:13:15.657 "min_cntlid": 1, 00:13:15.657 "max_cntlid": 65519, 00:13:15.657 "namespaces": [ 
00:13:15.657 { 00:13:15.657 "nsid": 1, 00:13:15.657 "bdev_name": "Malloc2", 00:13:15.657 "name": "Malloc2", 00:13:15.657 "nguid": "FA6D819008454D23B5C490F81F4D4C94", 00:13:15.657 "uuid": "fa6d8190-0845-4d23-b5c4-90f81f4d4c94" 00:13:15.657 }, 00:13:15.657 { 00:13:15.657 "nsid": 2, 00:13:15.657 "bdev_name": "Malloc4", 00:13:15.657 "name": "Malloc4", 00:13:15.657 "nguid": "EB714A526AA64AF4A3CBDCC4116BF102", 00:13:15.657 "uuid": "eb714a52-6aa6-4af4-a3cb-dcc4116bf102" 00:13:15.657 } 00:13:15.657 ] 00:13:15.657 } 00:13:15.657 ] 00:13:15.657 12:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2144034 00:13:15.657 12:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:15.657 12:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2136030 00:13:15.657 12:02:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@949 -- # '[' -z 2136030 ']' 00:13:15.657 12:02:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # kill -0 2136030 00:13:15.657 12:02:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # uname 00:13:15.657 12:02:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:15.657 12:02:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2136030 00:13:15.917 12:02:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:15.917 12:02:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:15.917 12:02:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2136030' 00:13:15.917 killing process with pid 2136030 00:13:15.917 12:02:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@968 -- # kill 2136030 00:13:15.917 12:02:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@973 -- # wait 2136030 00:13:15.917 12:02:05 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:16.177 12:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:16.177 12:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:16.177 12:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:16.177 12:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:16.177 12:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2144077 00:13:16.177 12:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:16.177 12:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2144077' 00:13:16.177 Process pid: 2144077 00:13:16.177 12:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:16.177 12:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2144077 00:13:16.177 12:02:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # '[' -z 2144077 ']' 00:13:16.177 12:02:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.177 12:02:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:16.177 12:02:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:16.177 12:02:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:16.177 12:02:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:16.177 [2024-06-10 12:02:05.484293] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:16.177 [2024-06-10 12:02:05.485208] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:13:16.177 [2024-06-10 12:02:05.485248] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.177 EAL: No free 2048 kB hugepages reported on node 1 00:13:16.177 [2024-06-10 12:02:05.553470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:16.177 [2024-06-10 12:02:05.621711] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:16.177 [2024-06-10 12:02:05.621758] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:16.177 [2024-06-10 12:02:05.621768] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:16.177 [2024-06-10 12:02:05.621776] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:16.177 [2024-06-10 12:02:05.621800] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:16.177 [2024-06-10 12:02:05.621849] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.177 [2024-06-10 12:02:05.621942] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:13:16.177 [2024-06-10 12:02:05.622031] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:13:16.177 [2024-06-10 12:02:05.622033] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.436 [2024-06-10 12:02:05.697345] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:16.436 [2024-06-10 12:02:05.697419] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:16.436 [2024-06-10 12:02:05.697637] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:16.436 [2024-06-10 12:02:05.697930] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:16.436 [2024-06-10 12:02:05.698131] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:13:17.003 12:02:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:17.003 12:02:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@863 -- # return 0 00:13:17.003 12:02:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:17.940 12:02:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:18.199 12:02:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:18.199 12:02:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:18.199 12:02:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:18.199 12:02:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:18.199 12:02:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:18.199 Malloc1 00:13:18.199 12:02:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:18.458 12:02:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:18.717 12:02:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:18.718 12:02:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:18.718 12:02:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p 
/var/run/vfio-user/domain/vfio-user2/2 00:13:18.718 12:02:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:18.977 Malloc2 00:13:18.977 12:02:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:19.236 12:02:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:19.236 12:02:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:19.495 12:02:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:19.495 12:02:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2144077 00:13:19.495 12:02:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@949 -- # '[' -z 2144077 ']' 00:13:19.495 12:02:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # kill -0 2144077 00:13:19.495 12:02:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # uname 00:13:19.495 12:02:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:19.495 12:02:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2144077 00:13:19.495 12:02:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:19.495 12:02:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:19.495 12:02:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2144077' 00:13:19.495 killing 
process with pid 2144077 00:13:19.495 12:02:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@968 -- # kill 2144077 00:13:19.495 12:02:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@973 -- # wait 2144077 00:13:19.755 12:02:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:19.755 12:02:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:19.755 00:13:19.755 real 0m51.413s 00:13:19.755 user 3m22.435s 00:13:19.755 sys 0m4.702s 00:13:19.755 12:02:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:19.755 12:02:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:19.755 ************************************ 00:13:19.755 END TEST nvmf_vfio_user 00:13:19.755 ************************************ 00:13:19.755 12:02:09 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:19.755 12:02:09 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:19.755 12:02:09 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:19.755 12:02:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:19.755 ************************************ 00:13:19.755 START TEST nvmf_vfio_user_nvme_compliance 00:13:19.755 ************************************ 00:13:19.755 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:20.015 * Looking for test storage... 
00:13:20.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.015 12:02:09 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:13:20.015 12:02:09 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:20.015 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:20.016 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:20.016 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:20.016 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2144937 00:13:20.016 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2144937' 00:13:20.016 Process pid: 2144937 00:13:20.016 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:20.016 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:20.016 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2144937 00:13:20.016 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@830 -- # '[' -z 2144937 ']' 00:13:20.016 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.016 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:20.016 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.016 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:20.016 12:02:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:20.016 [2024-06-10 12:02:09.451828] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:13:20.016 [2024-06-10 12:02:09.451877] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.016 EAL: No free 2048 kB hugepages reported on node 1 00:13:20.016 [2024-06-10 12:02:09.519028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:20.275 [2024-06-10 12:02:09.589193] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.275 [2024-06-10 12:02:09.589235] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:20.275 [2024-06-10 12:02:09.589245] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:20.275 [2024-06-10 12:02:09.589253] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:20.275 [2024-06-10 12:02:09.589260] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:20.275 [2024-06-10 12:02:09.589311] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.275 [2024-06-10 12:02:09.589408] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.275 [2024-06-10 12:02:09.589408] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.843 12:02:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:20.843 12:02:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@863 -- # return 0 00:13:20.843 12:02:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:21.779 12:02:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:21.779 12:02:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:21.779 12:02:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:21.779 12:02:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:21.779 12:02:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:21.779 12:02:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:21.779 12:02:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:21.779 12:02:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # 
rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:21.779 12:02:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:21.779 12:02:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:22.038 malloc0 00:13:22.038 12:02:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:22.038 12:02:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:22.039 12:02:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:22.039 12:02:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:22.039 12:02:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:22.039 12:02:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:22.039 12:02:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:22.039 12:02:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:22.039 12:02:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:22.039 12:02:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:22.039 12:02:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:22.039 12:02:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:22.039 12:02:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:22.039 12:02:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance 
-- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:22.039 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.039 00:13:22.039 00:13:22.039 CUnit - A unit testing framework for C - Version 2.1-3 00:13:22.039 http://cunit.sourceforge.net/ 00:13:22.039 00:13:22.039 00:13:22.039 Suite: nvme_compliance 00:13:22.039 Test: admin_identify_ctrlr_verify_dptr ...[2024-06-10 12:02:11.505923] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:22.039 [2024-06-10 12:02:11.507249] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:22.039 [2024-06-10 12:02:11.507265] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:22.039 [2024-06-10 12:02:11.507273] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:22.039 [2024-06-10 12:02:11.510952] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:22.039 passed 00:13:22.298 Test: admin_identify_ctrlr_verify_fused ...[2024-06-10 12:02:11.584532] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:22.298 [2024-06-10 12:02:11.587540] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:22.298 passed 00:13:22.298 Test: admin_identify_ns ...[2024-06-10 12:02:11.666330] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:22.298 [2024-06-10 12:02:11.725485] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:22.298 [2024-06-10 12:02:11.733485] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:22.298 [2024-06-10 12:02:11.754581] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling 
controller 00:13:22.298 passed 00:13:22.557 Test: admin_get_features_mandatory_features ...[2024-06-10 12:02:11.829911] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:22.557 [2024-06-10 12:02:11.832931] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:22.557 passed 00:13:22.557 Test: admin_get_features_optional_features ...[2024-06-10 12:02:11.909392] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:22.557 [2024-06-10 12:02:11.912412] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:22.557 passed 00:13:22.557 Test: admin_set_features_number_of_queues ...[2024-06-10 12:02:11.988966] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:22.817 [2024-06-10 12:02:12.094594] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:22.817 passed 00:13:22.817 Test: admin_get_log_page_mandatory_logs ...[2024-06-10 12:02:12.167035] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:22.817 [2024-06-10 12:02:12.170058] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:22.817 passed 00:13:22.817 Test: admin_get_log_page_with_lpo ...[2024-06-10 12:02:12.247606] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:22.817 [2024-06-10 12:02:12.317491] ctrlr.c:2656:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:22.817 [2024-06-10 12:02:12.330571] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:23.076 passed 00:13:23.076 Test: fabric_property_get ...[2024-06-10 12:02:12.403151] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:23.076 [2024-06-10 12:02:12.404369] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 
0x7f failed 00:13:23.076 [2024-06-10 12:02:12.406165] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:23.076 passed 00:13:23.076 Test: admin_delete_io_sq_use_admin_qid ...[2024-06-10 12:02:12.483656] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:23.076 [2024-06-10 12:02:12.484877] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:23.076 [2024-06-10 12:02:12.486675] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:23.076 passed 00:13:23.076 Test: admin_delete_io_sq_delete_sq_twice ...[2024-06-10 12:02:12.563228] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:23.336 [2024-06-10 12:02:12.647486] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:23.336 [2024-06-10 12:02:12.663486] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:23.336 [2024-06-10 12:02:12.668570] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:23.336 passed 00:13:23.336 Test: admin_delete_io_cq_use_admin_qid ...[2024-06-10 12:02:12.741980] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:23.336 [2024-06-10 12:02:12.743200] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:23.336 [2024-06-10 12:02:12.745002] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:23.336 passed 00:13:23.336 Test: admin_delete_io_cq_delete_cq_first ...[2024-06-10 12:02:12.820555] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:23.596 [2024-06-10 12:02:12.897487] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:23.596 [2024-06-10 12:02:12.924493] 
vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:23.596 [2024-06-10 12:02:12.929566] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:23.596 passed 00:13:23.596 Test: admin_create_io_cq_verify_iv_pc ...[2024-06-10 12:02:13.002026] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:23.596 [2024-06-10 12:02:13.003253] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:23.596 [2024-06-10 12:02:13.003280] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:23.596 [2024-06-10 12:02:13.005045] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:23.596 passed 00:13:23.596 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-06-10 12:02:13.081579] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:23.855 [2024-06-10 12:02:13.174486] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:13:23.855 [2024-06-10 12:02:13.182486] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:23.855 [2024-06-10 12:02:13.190488] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:23.855 [2024-06-10 12:02:13.198487] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:23.855 [2024-06-10 12:02:13.227585] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:23.855 passed 00:13:23.855 Test: admin_create_io_sq_verify_pc ...[2024-06-10 12:02:13.302846] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:23.855 [2024-06-10 12:02:13.321493] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:23.855 [2024-06-10 12:02:13.339029] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:13:23.855 passed 00:13:24.113 Test: admin_create_io_qp_max_qps ...[2024-06-10 12:02:13.413548] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:25.050 [2024-06-10 12:02:14.508486] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:25.618 [2024-06-10 12:02:14.882303] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:25.618 passed 00:13:25.618 Test: admin_create_io_sq_shared_cq ...[2024-06-10 12:02:14.956962] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:25.618 [2024-06-10 12:02:15.089495] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:25.618 [2024-06-10 12:02:15.126551] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:25.877 passed 00:13:25.877 00:13:25.877 Run Summary: Type Total Ran Passed Failed Inactive 00:13:25.877 suites 1 1 n/a 0 0 00:13:25.877 tests 18 18 18 0 0 00:13:25.877 asserts 360 360 360 0 n/a 00:13:25.877 00:13:25.877 Elapsed time = 1.486 seconds 00:13:25.877 12:02:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2144937 00:13:25.877 12:02:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@949 -- # '[' -z 2144937 ']' 00:13:25.877 12:02:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # kill -0 2144937 00:13:25.877 12:02:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # uname 00:13:25.877 12:02:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:25.877 12:02:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2144937 00:13:25.877 12:02:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:25.877 12:02:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:25.877 12:02:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2144937' 00:13:25.877 killing process with pid 2144937 00:13:25.877 12:02:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # kill 2144937 00:13:25.877 12:02:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # wait 2144937 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:26.137 00:13:26.137 real 0m6.156s 00:13:26.137 user 0m17.263s 00:13:26.137 sys 0m0.734s 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:26.137 ************************************ 00:13:26.137 END TEST nvmf_vfio_user_nvme_compliance 00:13:26.137 ************************************ 00:13:26.137 12:02:15 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:26.137 12:02:15 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:26.137 12:02:15 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:26.137 12:02:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:26.137 ************************************ 00:13:26.137 START TEST nvmf_vfio_user_fuzz 00:13:26.137 ************************************ 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:26.137 * Looking for test storage... 00:13:26.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:26.137 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:26.138 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:26.138 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:26.138 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:26.138 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:26.138 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:26.138 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2146060 00:13:26.138 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2146060' 00:13:26.138 Process pid: 2146060 00:13:26.138 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:26.138 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:26.138 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2146060 00:13:26.138 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@830 -- # '[' -z 2146060 ']' 00:13:26.138 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.138 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:26.138 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.138 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:26.138 12:02:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:27.166 12:02:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:27.166 12:02:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@863 -- # return 0 00:13:27.166 12:02:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:28.103 12:02:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:28.103 12:02:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:28.104 12:02:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:28.104 12:02:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:28.104 12:02:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:28.104 12:02:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:28.104 12:02:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:28.104 12:02:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:28.104 malloc0 00:13:28.104 12:02:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:28.104 12:02:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:28.104 12:02:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:28.104 
12:02:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:28.104 12:02:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:28.104 12:02:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:28.104 12:02:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:28.104 12:02:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:28.104 12:02:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:28.104 12:02:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:28.104 12:02:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:28.104 12:02:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:28.104 12:02:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:28.104 12:02:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:28.104 12:02:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:00.184 Fuzzing completed. 
Shutting down the fuzz application 00:14:00.184 00:14:00.184 Dumping successful admin opcodes: 00:14:00.184 8, 9, 10, 24, 00:14:00.184 Dumping successful io opcodes: 00:14:00.184 0, 00:14:00.184 NS: 0x200003a1ef00 I/O qp, Total commands completed: 943699, total successful commands: 3688, random_seed: 2976622848 00:14:00.184 NS: 0x200003a1ef00 admin qp, Total commands completed: 229337, total successful commands: 1838, random_seed: 1325773824 00:14:00.184 12:02:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:00.184 12:02:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:00.184 12:02:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:00.184 12:02:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:00.184 12:02:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2146060 00:14:00.184 12:02:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@949 -- # '[' -z 2146060 ']' 00:14:00.184 12:02:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # kill -0 2146060 00:14:00.184 12:02:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # uname 00:14:00.184 12:02:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:00.184 12:02:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2146060 00:14:00.184 12:02:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:00.184 12:02:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:00.184 12:02:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2146060' 00:14:00.184 killing process with pid 2146060 00:14:00.184 12:02:48 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@968 -- # kill 2146060 00:14:00.184 12:02:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # wait 2146060 00:14:00.184 12:02:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:00.184 12:02:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:00.184 00:14:00.184 real 0m32.795s 00:14:00.184 user 0m32.223s 00:14:00.184 sys 0m29.039s 00:14:00.184 12:02:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:00.184 12:02:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:00.184 ************************************ 00:14:00.184 END TEST nvmf_vfio_user_fuzz 00:14:00.184 ************************************ 00:14:00.184 12:02:48 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:00.184 12:02:48 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:00.184 12:02:48 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:00.184 12:02:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:00.184 ************************************ 00:14:00.184 START TEST nvmf_host_management 00:14:00.184 ************************************ 00:14:00.184 12:02:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:00.184 * Looking for test storage... 
00:14:00.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:00.184 12:02:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:00.184 12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:14:00.184 12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.184 12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.184 12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.184 12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.184 12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.184 12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.184 12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.184 12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.184 12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.184 12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.184 12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:00.184 12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:00.184 12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.184 12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.184 12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:00.184 
12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:00.184 12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:00.184 12:02:48 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.184 12:02:48 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.184 12:02:48 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.184 12:02:48 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.184 12:02:48 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.184 12:02:48 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.184 12:02:48 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:14:00.184 12:02:48 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.184 12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:14:00.184 12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:00.185 12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:00.185 12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:00.185 12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.185 12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.185 12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:00.185 12:02:48 nvmf_tcp.nvmf_host_management 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:00.185 12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:00.185 12:02:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:00.185 12:02:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:00.185 12:02:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:14:00.185 12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:00.185 12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.185 12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:00.185 12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:00.185 12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:00.185 12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.185 12:02:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.185 12:02:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.185 12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:00.185 12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:00.185 12:02:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:14:00.185 12:02:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:06.758 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:06.758 
12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:06.758 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:06.758 Found net devices under 0000:af:00.0: cvl_0_0 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:06.758 Found net devices under 0000:af:00.1: cvl_0_1 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:14:06.758 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:06.759 12:02:55 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:06.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:06.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:14:06.759 00:14:06.759 --- 10.0.0.2 ping statistics --- 00:14:06.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.759 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:06.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:06.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:14:06.759 00:14:06.759 --- 10.0.0.1 ping statistics --- 00:14:06.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.759 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:06.759 12:02:55 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2154778 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2154778 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 2154778 ']' 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:06.759 12:02:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:06.759 [2024-06-10 12:02:55.421963] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:14:06.759 [2024-06-10 12:02:55.422015] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.759 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.759 [2024-06-10 12:02:55.498046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:06.759 [2024-06-10 12:02:55.573404] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:06.759 [2024-06-10 12:02:55.573442] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:06.759 [2024-06-10 12:02:55.573451] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:06.759 [2024-06-10 12:02:55.573460] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:06.759 [2024-06-10 12:02:55.573467] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:06.759 [2024-06-10 12:02:55.573624] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.759 [2024-06-10 12:02:55.573688] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:14:06.759 [2024-06-10 12:02:55.573815] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.759 [2024-06-10 12:02:55.573816] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:14:06.759 12:02:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:06.759 12:02:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:14:06.759 12:02:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:06.759 12:02:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:06.759 12:02:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:07.018 12:02:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.018 12:02:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:07.018 12:02:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:07.018 12:02:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:07.018 [2024-06-10 12:02:56.287182] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:07.018 12:02:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:07.018 12:02:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:07.018 12:02:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:07.018 12:02:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:07.018 12:02:56 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:14:07.018 12:02:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat
00:14:07.018 12:02:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd
00:14:07.018 12:02:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable
00:14:07.018 12:02:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:14:07.018 Malloc0
00:14:07.018 [2024-06-10 12:02:56.353632] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:07.018 12:02:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:14:07.018 12:02:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems
00:14:07.018 12:02:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable
00:14:07.018 12:02:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:14:07.018 12:02:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2155080
00:14:07.018 12:02:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2155080 /var/tmp/bdevperf.sock
00:14:07.018 12:02:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 2155080 ']'
00:14:07.018 12:02:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:14:07.018 12:02:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100
00:14:07.018 12:02:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:14:07.018 12:02:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:14:07.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:14:07.018 12:02:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:14:07.018 12:02:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable
00:14:07.018 12:02:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
00:14:07.018 12:02:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:14:07.018 12:02:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
00:14:07.018 12:02:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:14:07.018 12:02:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:14:07.018 {
00:14:07.018 "params": {
00:14:07.018 "name": "Nvme$subsystem",
00:14:07.018 "trtype": "$TEST_TRANSPORT",
00:14:07.018 "traddr": "$NVMF_FIRST_TARGET_IP",
00:14:07.018 "adrfam": "ipv4",
00:14:07.018 "trsvcid": "$NVMF_PORT",
00:14:07.018 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:14:07.018 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:14:07.018 "hdgst": ${hdgst:-false},
00:14:07.018 "ddgst": ${ddgst:-false}
00:14:07.018 },
00:14:07.018 "method": "bdev_nvme_attach_controller"
00:14:07.018 }
00:14:07.018 EOF
00:14:07.018 )")
00:14:07.018 12:02:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:14:07.018 12:02:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
00:14:07.018 12:02:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
00:14:07.018 12:02:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:14:07.018 "params": {
00:14:07.018 "name": "Nvme0",
00:14:07.018 "trtype": "tcp",
00:14:07.018 "traddr": "10.0.0.2",
00:14:07.018 "adrfam": "ipv4",
00:14:07.018 "trsvcid": "4420",
00:14:07.018 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:14:07.018 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:14:07.018 "hdgst": false,
00:14:07.018 "ddgst": false
00:14:07.018 },
00:14:07.018 "method": "bdev_nvme_attach_controller"
00:14:07.018 }'
00:14:07.018 [2024-06-10 12:02:56.459649] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization...
00:14:07.018 [2024-06-10 12:02:56.459700] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2155080 ]
00:14:07.018 EAL: No free 2048 kB hugepages reported on node 1
00:14:07.018 [2024-06-10 12:02:56.531016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:07.276 [2024-06-10 12:02:56.600296] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:14:07.535 Running I/O for 10 seconds...
00:14:07.793 12:02:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:14:07.793 12:02:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0
00:14:07.794 12:02:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:14:07.794 12:02:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable
00:14:07.794 12:02:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:14:07.794 12:02:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:14:07.794 12:02:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:14:07.794 12:02:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:14:07.794 12:02:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:14:07.794 12:02:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:14:07.794 12:02:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:14:07.794 12:02:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:14:07.794 12:02:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:14:07.794 12:02:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:14:07.794 12:02:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:14:07.794 12:02:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:14:07.794 12:02:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable
00:14:07.794 12:02:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:14:08.053 12:02:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:14:08.053 12:02:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707
00:14:08.053 12:02:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']'
00:14:08.053 12:02:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:14:08.053 12:02:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break
00:14:08.053 12:02:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:14:08.053 12:02:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:14:08.053 12:02:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable
00:14:08.053 12:02:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:14:08.053 [2024-06-10 12:02:57.340826] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46650 is same with the state(5) to be set
00:14:08.053 [2024-06-10 12:02:57.340872] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46650 is same with the state(5) to be set
00:14:08.053 [2024-06-10 12:02:57.340882] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46650 is same with the state(5) to be set
00:14:08.053 12:02:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:14:08.053 12:02:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:14:08.053 12:02:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable
00:14:08.053 12:02:57 nvmf_tcp.nvmf_host_management --
common/autotest_common.sh@10 -- # set +x 00:14:08.053 [2024-06-10 12:02:57.349951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:08.053 [2024-06-10 12:02:57.349985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.053 [2024-06-10 12:02:57.349997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:08.053 [2024-06-10 12:02:57.350011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.053 [2024-06-10 12:02:57.350021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:08.054 [2024-06-10 12:02:57.350031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:08.054 [2024-06-10 12:02:57.350051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350061] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d3820 is same with the state(5) to be set 00:14:08.054 [2024-06-10 12:02:57.350099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350126] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350238] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108032 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 
[2024-06-10 12:02:57.350482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350594] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350706] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.054 [2024-06-10 12:02:57.350855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.054 [2024-06-10 12:02:57.350864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.055 [2024-06-10 12:02:57.350875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.055 [2024-06-10 12:02:57.350884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.055 [2024-06-10 12:02:57.350895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.055 [2024-06-10 12:02:57.350904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.055 [2024-06-10 12:02:57.350915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.055 [2024-06-10 12:02:57.350924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.055 [2024-06-10 12:02:57.350935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:14:08.055 [2024-06-10 12:02:57.350944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.055 [2024-06-10 12:02:57.350955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.055 [2024-06-10 12:02:57.350964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.055 [2024-06-10 12:02:57.350975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.055 [2024-06-10 12:02:57.350984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.055 [2024-06-10 12:02:57.350995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.055 [2024-06-10 12:02:57.351005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.055 [2024-06-10 12:02:57.351015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.055 [2024-06-10 12:02:57.351025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.055 [2024-06-10 12:02:57.351035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.055 [2024-06-10 12:02:57.351045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.055 [2024-06-10 12:02:57.351055] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.055 [2024-06-10 12:02:57.351065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.055 [2024-06-10 12:02:57.351077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.055 [2024-06-10 12:02:57.351086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.055 [2024-06-10 12:02:57.351097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.055 [2024-06-10 12:02:57.351107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.055 [2024-06-10 12:02:57.351117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.055 [2024-06-10 12:02:57.351126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.055 [2024-06-10 12:02:57.351137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.055 [2024-06-10 12:02:57.351147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.055 [2024-06-10 12:02:57.351158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.055 [2024-06-10 12:02:57.351167] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.055 [2024-06-10 12:02:57.351178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.055 [2024-06-10 12:02:57.351188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.055 [2024-06-10 12:02:57.351200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.055 [2024-06-10 12:02:57.351209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.055 [2024-06-10 12:02:57.351220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.055 [2024-06-10 12:02:57.351229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.055 [2024-06-10 12:02:57.351239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.055 [2024-06-10 12:02:57.351248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.055 [2024-06-10 12:02:57.351259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.055 [2024-06-10 12:02:57.351268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.055 [2024-06-10 12:02:57.351279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.055 [2024-06-10 12:02:57.351288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.055 [2024-06-10 12:02:57.351298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.055 [2024-06-10 12:02:57.351307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.055 [2024-06-10 12:02:57.351318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.055 [2024-06-10 12:02:57.351328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.055 [2024-06-10 12:02:57.351339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.055 [2024-06-10 12:02:57.351348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.055 [2024-06-10 12:02:57.351360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.055 [2024-06-10 12:02:57.351369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.055 [2024-06-10 12:02:57.351379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.055 [2024-06-10 12:02:57.351389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:08.055 [2024-06-10 12:02:57.351399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:08.055 [2024-06-10 12:02:57.351409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:08.055 [2024-06-10 12:02:57.351472] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14e4860 was disconnected and freed. reset controller.
00:14:08.055 [2024-06-10 12:02:57.352347] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:14:08.055 task offset: 106496 on job bdev=Nvme0n1 fails
00:14:08.055
00:14:08.055 Latency(us)
00:14:08.055 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:08.055 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:14:08.055 Job: Nvme0n1 ended in about 0.43 seconds with error
00:14:08.055 Verification LBA range: start 0x0 length 0x400
00:14:08.055 Nvme0n1 : 0.43 1915.61 119.73 147.35 0.00 30268.27 1966.08 26319.26
00:14:08.055 ===================================================================================================================
00:14:08.055 Total : 1915.61 119.73 147.35 0.00 30268.27 1966.08 26319.26
00:14:08.055 [2024-06-10 12:02:57.353863] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:14:08.055 [2024-06-10 12:02:57.353880] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d3820 (9): Bad file descriptor
00:14:08.055 12:02:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:14:08.055 12:02:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
[2024-06-10 12:02:57.364565] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:14:08.991 12:02:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2155080 00:14:08.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2155080) - No such process 00:14:08.991 12:02:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:14:08.991 12:02:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:08.991 12:02:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:08.991 12:02:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:08.991 12:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:08.991 12:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:08.991 12:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:08.991 12:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:08.991 { 00:14:08.991 "params": { 00:14:08.991 "name": "Nvme$subsystem", 00:14:08.991 "trtype": "$TEST_TRANSPORT", 00:14:08.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:08.991 "adrfam": "ipv4", 00:14:08.991 "trsvcid": "$NVMF_PORT", 00:14:08.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:08.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:08.991 "hdgst": ${hdgst:-false}, 00:14:08.991 "ddgst": ${ddgst:-false} 00:14:08.991 }, 00:14:08.991 "method": "bdev_nvme_attach_controller" 00:14:08.991 } 00:14:08.991 EOF 00:14:08.991 )") 00:14:08.991 12:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:08.991 12:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:14:08.991 12:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:08.991 12:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:08.991 "params": { 00:14:08.991 "name": "Nvme0", 00:14:08.991 "trtype": "tcp", 00:14:08.991 "traddr": "10.0.0.2", 00:14:08.991 "adrfam": "ipv4", 00:14:08.991 "trsvcid": "4420", 00:14:08.991 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:08.991 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:08.991 "hdgst": false, 00:14:08.991 "ddgst": false 00:14:08.991 }, 00:14:08.991 "method": "bdev_nvme_attach_controller" 00:14:08.991 }' 00:14:08.991 [2024-06-10 12:02:58.414070] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:14:08.991 [2024-06-10 12:02:58.414124] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2155368 ] 00:14:08.991 EAL: No free 2048 kB hugepages reported on node 1 00:14:08.991 [2024-06-10 12:02:58.483928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.249 [2024-06-10 12:02:58.550749] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.508 Running I/O for 1 seconds... 
00:14:10.445 00:14:10.445 Latency(us) 00:14:10.445 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.445 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:10.445 Verification LBA range: start 0x0 length 0x400 00:14:10.445 Nvme0n1 : 1.02 1998.41 124.90 0.00 0.00 31552.37 7864.32 26528.97 00:14:10.445 =================================================================================================================== 00:14:10.445 Total : 1998.41 124.90 0.00 0.00 31552.37 7864.32 26528.97 00:14:10.704 12:02:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:14:10.704 12:02:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:10.704 12:02:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:14:10.704 12:02:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:10.704 12:03:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:14:10.704 12:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:10.704 12:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:14:10.704 12:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:10.704 12:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:14:10.704 12:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:10.704 12:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:10.704 rmmod nvme_tcp 00:14:10.704 rmmod nvme_fabrics 00:14:10.704 rmmod nvme_keyring 00:14:10.704 12:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:10.704 
12:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:14:10.704 12:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:14:10.704 12:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2154778 ']' 00:14:10.704 12:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2154778 00:14:10.704 12:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@949 -- # '[' -z 2154778 ']' 00:14:10.704 12:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # kill -0 2154778 00:14:10.704 12:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # uname 00:14:10.704 12:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:10.704 12:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2154778 00:14:10.704 12:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:14:10.704 12:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:14:10.704 12:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2154778' 00:14:10.704 killing process with pid 2154778 00:14:10.704 12:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@968 -- # kill 2154778 00:14:10.704 12:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@973 -- # wait 2154778 00:14:10.963 [2024-06-10 12:03:00.331111] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:10.963 12:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:10.963 12:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:10.963 12:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:10.963 12:03:00 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:10.963 12:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:10.963 12:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.963 12:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:10.963 12:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.496 12:03:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:13.496 12:03:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:14:13.496 00:14:13.496 real 0m14.072s 00:14:13.496 user 0m23.297s 00:14:13.496 sys 0m6.601s 00:14:13.496 12:03:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:13.496 12:03:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:13.496 ************************************ 00:14:13.496 END TEST nvmf_host_management 00:14:13.496 ************************************ 00:14:13.496 12:03:02 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:13.496 12:03:02 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:13.496 12:03:02 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:13.496 12:03:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:13.496 ************************************ 00:14:13.496 START TEST nvmf_lvol 00:14:13.496 ************************************ 00:14:13.496 12:03:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:13.496 * Looking for test storage... 
00:14:13.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:13.496 12:03:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:13.496 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:14:13.496 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:13.496 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:13.496 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:13.496 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:13.496 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:13.496 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:13.496 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:13.496 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:14:13.497 12:03:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:20.065 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:20.065 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.065 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:20.066 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:20.066 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:20.066 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:20.066 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.066 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:20.066 Found net devices under 0000:af:00.0: cvl_0_0 00:14:20.066 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.066 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:20.066 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.066 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:20.066 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:20.066 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:20.066 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:14:20.066 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.066 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:20.066 Found net devices under 0000:af:00.1: cvl_0_1 00:14:20.066 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.066 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:20.066 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:14:20.066 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:20.066 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:20.066 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:20.066 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:20.066 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:20.066 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:20.066 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:20.066 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:20.066 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:20.066 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:20.066 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:20.066 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:20.066 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:20.066 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:20.066 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 
00:14:20.066 12:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:20.066 12:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:20.066 12:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:20.066 12:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:20.066 12:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:20.066 12:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:20.066 12:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:20.066 12:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:20.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:20.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:14:20.066 00:14:20.066 --- 10.0.0.2 ping statistics --- 00:14:20.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.066 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:14:20.066 12:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:20.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:20.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:14:20.066 00:14:20.066 --- 10.0.0.1 ping statistics --- 00:14:20.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.066 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:14:20.066 12:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:20.066 12:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:14:20.066 12:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:20.066 12:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:20.066 12:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:20.066 12:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:20.066 12:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:20.066 12:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:20.066 12:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:20.066 12:03:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:20.066 12:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:20.066 12:03:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:20.066 12:03:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:20.066 12:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:20.066 12:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2159885 00:14:20.066 12:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2159885 00:14:20.066 12:03:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@830 -- # '[' -z 2159885 ']' 00:14:20.066 12:03:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 
-- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.066 12:03:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:20.066 12:03:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.066 12:03:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:20.066 12:03:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:20.066 [2024-06-10 12:03:09.319012] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:14:20.066 [2024-06-10 12:03:09.319067] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.066 EAL: No free 2048 kB hugepages reported on node 1 00:14:20.066 [2024-06-10 12:03:09.394273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:20.066 [2024-06-10 12:03:09.468306] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:20.066 [2024-06-10 12:03:09.468343] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:20.066 [2024-06-10 12:03:09.468353] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:20.066 [2024-06-10 12:03:09.468362] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:20.066 [2024-06-10 12:03:09.468386] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:20.066 [2024-06-10 12:03:09.468431] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.066 [2024-06-10 12:03:09.468515] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:14:20.066 [2024-06-10 12:03:09.468518] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.635 12:03:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:20.635 12:03:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@863 -- # return 0 00:14:20.635 12:03:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:20.635 12:03:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:20.635 12:03:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:20.894 12:03:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.894 12:03:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:20.894 [2024-06-10 12:03:10.325293] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:20.894 12:03:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:21.153 12:03:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:21.153 12:03:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:21.412 12:03:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:21.412 12:03:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:21.412 12:03:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:21.672 12:03:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=3e69aa36-93cf-4133-a345-f898b49dd38a 00:14:21.672 12:03:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3e69aa36-93cf-4133-a345-f898b49dd38a lvol 20 00:14:21.930 12:03:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=e6d9eed7-3bf9-4c1d-a156-675d14227d28 00:14:21.930 12:03:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:22.189 12:03:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e6d9eed7-3bf9-4c1d-a156-675d14227d28 00:14:22.189 12:03:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:22.449 [2024-06-10 12:03:11.797194] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:22.449 12:03:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:22.710 12:03:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2160451 00:14:22.710 12:03:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:22.710 12:03:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:22.710 EAL: No free 2048 kB hugepages reported on node 1 
00:14:23.676 12:03:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot e6d9eed7-3bf9-4c1d-a156-675d14227d28 MY_SNAPSHOT 00:14:23.935 12:03:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=efc372fc-a4dd-4134-a432-9e965a1480d8 00:14:23.935 12:03:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize e6d9eed7-3bf9-4c1d-a156-675d14227d28 30 00:14:24.194 12:03:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone efc372fc-a4dd-4134-a432-9e965a1480d8 MY_CLONE 00:14:24.194 12:03:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=3f1c8e58-7c11-4ab7-b61b-0abbc76ece59 00:14:24.194 12:03:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 3f1c8e58-7c11-4ab7-b61b-0abbc76ece59 00:14:24.762 12:03:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2160451 00:14:34.741 Initializing NVMe Controllers 00:14:34.741 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:34.741 Controller IO queue size 128, less than required. 00:14:34.741 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:34.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:34.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:34.741 Initialization complete. Launching workers. 
00:14:34.741 ======================================================== 00:14:34.741 Latency(us) 00:14:34.741 Device Information : IOPS MiB/s Average min max 00:14:34.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12438.70 48.59 10296.73 408.42 59017.10 00:14:34.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12271.40 47.94 10432.76 3235.87 55969.00 00:14:34.741 ======================================================== 00:14:34.741 Total : 24710.10 96.52 10364.29 408.42 59017.10 00:14:34.741 00:14:34.741 12:03:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:34.741 12:03:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e6d9eed7-3bf9-4c1d-a156-675d14227d28 00:14:34.741 12:03:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3e69aa36-93cf-4133-a345-f898b49dd38a 00:14:34.741 12:03:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:34.741 12:03:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:34.741 12:03:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:34.741 12:03:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:34.741 12:03:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:14:34.741 12:03:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:34.741 12:03:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:14:34.741 12:03:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:34.741 12:03:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:34.741 rmmod nvme_tcp 00:14:34.741 rmmod nvme_fabrics 00:14:34.742 rmmod nvme_keyring 00:14:34.742 
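The "Total" row of the summary table above can be recomputed from the two per-core rows: IOPS add, MiB/s follows from the 4096-byte I/O size (`-o 4096`), and the average latency is the IOPS-weighted mean. A quick check in plain Python (values copied from the table; small last-digit differences are expected since the printed per-row numbers are themselves rounded):

```python
# Recompute the "Total" row of the perf summary from the per-core rows.
rows = [  # (IOPS, average latency in us) for core 3 and core 4
    (12438.70, 10296.73),
    (12271.40, 10432.76),
]
total_iops = sum(iops for iops, _ in rows)                 # 24710.10
mib_s = total_iops * 4096 / 2**20                          # 4 KiB I/Os -> MiB/s
avg_lat = sum(i * l for i, l in rows) / total_iops         # IOPS-weighted mean

print(round(total_iops, 2), round(mib_s, 2), round(avg_lat, 2))
```

Note that summing the rounded per-row MiB/s values (48.59 + 47.94) would give 96.53, while deriving MiB/s from the total IOPS reproduces the table's 96.52.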
12:03:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:34.742 12:03:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:14:34.742 12:03:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:14:34.742 12:03:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2159885 ']' 00:14:34.742 12:03:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2159885 00:14:34.742 12:03:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@949 -- # '[' -z 2159885 ']' 00:14:34.742 12:03:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # kill -0 2159885 00:14:34.742 12:03:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # uname 00:14:34.742 12:03:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:34.742 12:03:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2159885 00:14:34.742 12:03:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:34.742 12:03:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:34.742 12:03:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2159885' 00:14:34.742 killing process with pid 2159885 00:14:34.742 12:03:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@968 -- # kill 2159885 00:14:34.742 12:03:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@973 -- # wait 2159885 00:14:34.742 12:03:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:34.742 12:03:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:34.742 12:03:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:34.742 12:03:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:34.742 12:03:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:34.742 12:03:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.742 12:03:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:34.742 12:03:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.120 12:03:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:36.120 00:14:36.120 real 0m22.920s 00:14:36.120 user 1m2.556s 00:14:36.120 sys 0m9.724s 00:14:36.120 12:03:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:36.120 12:03:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:36.120 ************************************ 00:14:36.120 END TEST nvmf_lvol 00:14:36.120 ************************************ 00:14:36.120 12:03:25 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:36.120 12:03:25 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:36.120 12:03:25 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:36.120 12:03:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:36.120 ************************************ 00:14:36.120 START TEST nvmf_lvs_grow 00:14:36.120 ************************************ 00:14:36.120 12:03:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:36.120 * Looking for test storage... 
00:14:36.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:36.379 12:03:25 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:36.379 12:03:25 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:14:36.379 12:03:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:42.948 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:42.948 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:14:42.948 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:42.948 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:42.948 12:03:32 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:42.948 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:42.948 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:42.948 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:14:42.948 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:42.948 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:14:42.948 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:14:42.948 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:14:42.948 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:14:42.948 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:14:42.948 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:14:42.948 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:42.948 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:42.948 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:42.948 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:42.948 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:42.949 12:03:32 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:42.949 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:42.949 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:42.949 12:03:32 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:42.949 Found net devices under 0000:af:00.0: cvl_0_0 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:42.949 Found net devices under 0000:af:00.1: cvl_0_1 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:42.949 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:43.207 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:43.207 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:43.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:43.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:14:43.207 00:14:43.207 --- 10.0.0.2 ping statistics --- 00:14:43.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.207 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:14:43.207 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:43.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:43.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:14:43.207 00:14:43.207 --- 10.0.0.1 ping statistics --- 00:14:43.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.207 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:14:43.207 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:43.207 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:14:43.207 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:43.207 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:43.207 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:43.207 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:43.207 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:43.207 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:43.207 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:43.207 12:03:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:43.207 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:43.207 12:03:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:43.207 12:03:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:43.207 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2166018 00:14:43.207 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:43.207 12:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2166018 00:14:43.207 12:03:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@830 -- # '[' -z 2166018 ']' 
00:14:43.207 12:03:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.207 12:03:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:43.207 12:03:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.207 12:03:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:43.207 12:03:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:43.207 [2024-06-10 12:03:32.599039] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:14:43.207 [2024-06-10 12:03:32.599086] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.207 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.207 [2024-06-10 12:03:32.674506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.466 [2024-06-10 12:03:32.747307] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.466 [2024-06-10 12:03:32.747343] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.466 [2024-06-10 12:03:32.747352] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.466 [2024-06-10 12:03:32.747360] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.466 [2024-06-10 12:03:32.747384] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:43.466 [2024-06-10 12:03:32.747405] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.033 12:03:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:44.033 12:03:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@863 -- # return 0 00:14:44.033 12:03:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:44.033 12:03:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:44.033 12:03:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:44.033 12:03:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:44.033 12:03:33 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:44.291 [2024-06-10 12:03:33.574422] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:44.291 12:03:33 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:44.291 12:03:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:14:44.291 12:03:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:44.291 12:03:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:44.291 ************************************ 00:14:44.291 START TEST lvs_grow_clean 00:14:44.291 ************************************ 00:14:44.291 12:03:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # lvs_grow 00:14:44.291 12:03:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:44.291 12:03:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:44.291 12:03:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:44.292 12:03:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:44.292 12:03:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:44.292 12:03:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:44.292 12:03:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:44.292 12:03:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:44.292 12:03:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:44.550 12:03:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:44.550 12:03:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:44.550 12:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=eda90734-4ecb-4560-b242-fbdb432ce2a8 00:14:44.550 12:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eda90734-4ecb-4560-b242-fbdb432ce2a8 00:14:44.550 12:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:44.808 12:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:44.808 12:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:44.808 12:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u eda90734-4ecb-4560-b242-fbdb432ce2a8 lvol 150 00:14:45.067 12:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=287dec04-fbd5-43fc-8596-882573a72dff 00:14:45.067 12:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:45.067 12:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:45.067 [2024-06-10 12:03:34.513644] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:45.067 [2024-06-10 12:03:34.513692] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:45.067 true 00:14:45.067 12:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eda90734-4ecb-4560-b242-fbdb432ce2a8 00:14:45.067 12:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:45.325 12:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:45.325 12:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:14:45.584 12:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 287dec04-fbd5-43fc-8596-882573a72dff 00:14:45.584 12:03:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:45.853 [2024-06-10 12:03:35.167611] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:45.853 12:03:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:45.853 12:03:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2166537 00:14:45.853 12:03:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:45.853 12:03:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:45.853 12:03:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2166537 /var/tmp/bdevperf.sock 00:14:45.853 12:03:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@830 -- # '[' -z 2166537 ']' 00:14:45.853 12:03:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:45.853 12:03:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:45.853 12:03:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:45.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:45.853 12:03:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:45.853 12:03:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:46.111 [2024-06-10 12:03:35.396279] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:14:46.111 [2024-06-10 12:03:35.396334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2166537 ] 00:14:46.111 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.111 [2024-06-10 12:03:35.467237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.111 [2024-06-10 12:03:35.542362] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.677 12:03:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:46.677 12:03:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@863 -- # return 0 00:14:46.677 12:03:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:47.243 Nvme0n1 00:14:47.243 12:03:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:47.243 [ 00:14:47.243 { 00:14:47.243 "name": "Nvme0n1", 00:14:47.243 "aliases": [ 00:14:47.243 "287dec04-fbd5-43fc-8596-882573a72dff" 00:14:47.243 ], 00:14:47.243 
"product_name": "NVMe disk", 00:14:47.243 "block_size": 4096, 00:14:47.243 "num_blocks": 38912, 00:14:47.243 "uuid": "287dec04-fbd5-43fc-8596-882573a72dff", 00:14:47.243 "assigned_rate_limits": { 00:14:47.243 "rw_ios_per_sec": 0, 00:14:47.243 "rw_mbytes_per_sec": 0, 00:14:47.243 "r_mbytes_per_sec": 0, 00:14:47.243 "w_mbytes_per_sec": 0 00:14:47.243 }, 00:14:47.243 "claimed": false, 00:14:47.243 "zoned": false, 00:14:47.243 "supported_io_types": { 00:14:47.243 "read": true, 00:14:47.243 "write": true, 00:14:47.243 "unmap": true, 00:14:47.243 "write_zeroes": true, 00:14:47.243 "flush": true, 00:14:47.243 "reset": true, 00:14:47.243 "compare": true, 00:14:47.243 "compare_and_write": true, 00:14:47.243 "abort": true, 00:14:47.243 "nvme_admin": true, 00:14:47.243 "nvme_io": true 00:14:47.243 }, 00:14:47.243 "memory_domains": [ 00:14:47.243 { 00:14:47.243 "dma_device_id": "system", 00:14:47.243 "dma_device_type": 1 00:14:47.243 } 00:14:47.243 ], 00:14:47.243 "driver_specific": { 00:14:47.243 "nvme": [ 00:14:47.243 { 00:14:47.243 "trid": { 00:14:47.243 "trtype": "TCP", 00:14:47.243 "adrfam": "IPv4", 00:14:47.243 "traddr": "10.0.0.2", 00:14:47.243 "trsvcid": "4420", 00:14:47.243 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:47.243 }, 00:14:47.243 "ctrlr_data": { 00:14:47.243 "cntlid": 1, 00:14:47.243 "vendor_id": "0x8086", 00:14:47.243 "model_number": "SPDK bdev Controller", 00:14:47.243 "serial_number": "SPDK0", 00:14:47.243 "firmware_revision": "24.09", 00:14:47.243 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:47.243 "oacs": { 00:14:47.243 "security": 0, 00:14:47.243 "format": 0, 00:14:47.243 "firmware": 0, 00:14:47.243 "ns_manage": 0 00:14:47.243 }, 00:14:47.243 "multi_ctrlr": true, 00:14:47.243 "ana_reporting": false 00:14:47.243 }, 00:14:47.243 "vs": { 00:14:47.243 "nvme_version": "1.3" 00:14:47.243 }, 00:14:47.243 "ns_data": { 00:14:47.243 "id": 1, 00:14:47.243 "can_share": true 00:14:47.243 } 00:14:47.243 } 00:14:47.243 ], 00:14:47.243 "mp_policy": "active_passive" 
00:14:47.243 } 00:14:47.243 } 00:14:47.243 ] 00:14:47.243 12:03:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2166715 00:14:47.243 12:03:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:47.243 12:03:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:47.502 Running I/O for 10 seconds... 00:14:48.438 Latency(us) 00:14:48.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.438 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:48.438 Nvme0n1 : 1.00 24063.00 94.00 0.00 0.00 0.00 0.00 0.00 00:14:48.438 =================================================================================================================== 00:14:48.438 Total : 24063.00 94.00 0.00 0.00 0.00 0.00 0.00 00:14:48.438 00:14:49.375 12:03:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u eda90734-4ecb-4560-b242-fbdb432ce2a8 00:14:49.375 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:49.375 Nvme0n1 : 2.00 24236.50 94.67 0.00 0.00 0.00 0.00 0.00 00:14:49.375 =================================================================================================================== 00:14:49.375 Total : 24236.50 94.67 0.00 0.00 0.00 0.00 0.00 00:14:49.375 00:14:49.375 true 00:14:49.634 12:03:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:49.634 12:03:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eda90734-4ecb-4560-b242-fbdb432ce2a8 00:14:49.634 12:03:39 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:49.634 12:03:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:49.634 12:03:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2166715 00:14:50.570 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:50.570 Nvme0n1 : 3.00 24307.00 94.95 0.00 0.00 0.00 0.00 0.00 00:14:50.570 =================================================================================================================== 00:14:50.570 Total : 24307.00 94.95 0.00 0.00 0.00 0.00 0.00 00:14:50.570 00:14:51.505 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:51.505 Nvme0n1 : 4.00 24352.50 95.13 0.00 0.00 0.00 0.00 0.00 00:14:51.505 =================================================================================================================== 00:14:51.505 Total : 24352.50 95.13 0.00 0.00 0.00 0.00 0.00 00:14:51.505 00:14:52.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.440 Nvme0n1 : 5.00 24426.40 95.42 0.00 0.00 0.00 0.00 0.00 00:14:52.440 =================================================================================================================== 00:14:52.440 Total : 24426.40 95.42 0.00 0.00 0.00 0.00 0.00 00:14:52.440 00:14:53.376 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:53.376 Nvme0n1 : 6.00 24438.00 95.46 0.00 0.00 0.00 0.00 0.00 00:14:53.376 =================================================================================================================== 00:14:53.376 Total : 24438.00 95.46 0.00 0.00 0.00 0.00 0.00 00:14:53.376 00:14:54.312 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:54.312 Nvme0n1 : 7.00 24473.43 95.60 0.00 0.00 0.00 0.00 0.00 00:14:54.312 
=================================================================================================================== 00:14:54.312 Total : 24473.43 95.60 0.00 0.00 0.00 0.00 0.00 00:14:54.312 00:14:55.687 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:55.687 Nvme0n1 : 8.00 24493.62 95.68 0.00 0.00 0.00 0.00 0.00 00:14:55.687 =================================================================================================================== 00:14:55.688 Total : 24493.62 95.68 0.00 0.00 0.00 0.00 0.00 00:14:55.688 00:14:56.625 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:56.625 Nvme0n1 : 9.00 24513.22 95.75 0.00 0.00 0.00 0.00 0.00 00:14:56.625 =================================================================================================================== 00:14:56.625 Total : 24513.22 95.75 0.00 0.00 0.00 0.00 0.00 00:14:56.625 00:14:57.563 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:57.563 Nvme0n1 : 10.00 24519.60 95.78 0.00 0.00 0.00 0.00 0.00 00:14:57.563 =================================================================================================================== 00:14:57.563 Total : 24519.60 95.78 0.00 0.00 0.00 0.00 0.00 00:14:57.563 00:14:57.563 00:14:57.563 Latency(us) 00:14:57.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.563 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:57.563 Nvme0n1 : 10.00 24521.01 95.79 0.00 0.00 5217.09 3080.19 10852.76 00:14:57.563 =================================================================================================================== 00:14:57.563 Total : 24521.01 95.79 0.00 0.00 5217.09 3080.19 10852.76 00:14:57.563 0 00:14:57.563 12:03:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2166537 00:14:57.563 12:03:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@949 -- # '[' -z 
2166537 ']' 00:14:57.563 12:03:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # kill -0 2166537 00:14:57.563 12:03:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # uname 00:14:57.563 12:03:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:57.563 12:03:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2166537 00:14:57.563 12:03:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:14:57.563 12:03:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:14:57.563 12:03:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2166537' 00:14:57.563 killing process with pid 2166537 00:14:57.563 12:03:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # kill 2166537 00:14:57.563 Received shutdown signal, test time was about 10.000000 seconds 00:14:57.563 00:14:57.563 Latency(us) 00:14:57.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.563 =================================================================================================================== 00:14:57.563 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:57.563 12:03:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # wait 2166537 00:14:57.563 12:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:57.910 12:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:58.168 12:03:47 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eda90734-4ecb-4560-b242-fbdb432ce2a8 00:14:58.168 12:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:58.168 12:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:58.168 12:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:14:58.168 12:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:58.427 [2024-06-10 12:03:47.762748] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:58.427 12:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eda90734-4ecb-4560-b242-fbdb432ce2a8 00:14:58.427 12:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0 00:14:58.427 12:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eda90734-4ecb-4560-b242-fbdb432ce2a8 00:14:58.427 12:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:58.427 12:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:58.427 12:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:58.427 12:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:58.427 12:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:58.427 12:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:58.427 12:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:58.427 12:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:58.427 12:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eda90734-4ecb-4560-b242-fbdb432ce2a8 00:14:58.686 request: 00:14:58.686 { 00:14:58.686 "uuid": "eda90734-4ecb-4560-b242-fbdb432ce2a8", 00:14:58.686 "method": "bdev_lvol_get_lvstores", 00:14:58.686 "req_id": 1 00:14:58.686 } 00:14:58.686 Got JSON-RPC error response 00:14:58.686 response: 00:14:58.686 { 00:14:58.686 "code": -19, 00:14:58.686 "message": "No such device" 00:14:58.686 } 00:14:58.686 12:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1 00:14:58.686 12:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:58.686 12:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:58.686 12:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:58.686 12:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:58.686 aio_bdev 
00:14:58.686 12:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 287dec04-fbd5-43fc-8596-882573a72dff 00:14:58.686 12:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_name=287dec04-fbd5-43fc-8596-882573a72dff 00:14:58.686 12:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:14:58.686 12:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local i 00:14:58.686 12:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:14:58.686 12:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:14:58.686 12:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:58.944 12:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 287dec04-fbd5-43fc-8596-882573a72dff -t 2000 00:14:58.944 [ 00:14:58.944 { 00:14:58.944 "name": "287dec04-fbd5-43fc-8596-882573a72dff", 00:14:58.944 "aliases": [ 00:14:58.944 "lvs/lvol" 00:14:58.944 ], 00:14:58.944 "product_name": "Logical Volume", 00:14:58.944 "block_size": 4096, 00:14:58.944 "num_blocks": 38912, 00:14:58.944 "uuid": "287dec04-fbd5-43fc-8596-882573a72dff", 00:14:58.944 "assigned_rate_limits": { 00:14:58.944 "rw_ios_per_sec": 0, 00:14:58.944 "rw_mbytes_per_sec": 0, 00:14:58.944 "r_mbytes_per_sec": 0, 00:14:58.944 "w_mbytes_per_sec": 0 00:14:58.944 }, 00:14:58.944 "claimed": false, 00:14:58.944 "zoned": false, 00:14:58.944 "supported_io_types": { 00:14:58.944 "read": true, 00:14:58.944 "write": true, 00:14:58.944 "unmap": true, 00:14:58.945 "write_zeroes": true, 00:14:58.945 "flush": false, 00:14:58.945 "reset": true, 00:14:58.945 "compare": false, 
00:14:58.945 "compare_and_write": false, 00:14:58.945 "abort": false, 00:14:58.945 "nvme_admin": false, 00:14:58.945 "nvme_io": false 00:14:58.945 }, 00:14:58.945 "driver_specific": { 00:14:58.945 "lvol": { 00:14:58.945 "lvol_store_uuid": "eda90734-4ecb-4560-b242-fbdb432ce2a8", 00:14:58.945 "base_bdev": "aio_bdev", 00:14:58.945 "thin_provision": false, 00:14:58.945 "num_allocated_clusters": 38, 00:14:58.945 "snapshot": false, 00:14:58.945 "clone": false, 00:14:58.945 "esnap_clone": false 00:14:58.945 } 00:14:58.945 } 00:14:58.945 } 00:14:58.945 ] 00:14:58.945 12:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # return 0 00:14:58.945 12:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eda90734-4ecb-4560-b242-fbdb432ce2a8 00:14:58.945 12:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:59.203 12:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:59.203 12:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eda90734-4ecb-4560-b242-fbdb432ce2a8 00:14:59.203 12:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:59.462 12:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:59.462 12:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 287dec04-fbd5-43fc-8596-882573a72dff 00:14:59.462 12:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 
eda90734-4ecb-4560-b242-fbdb432ce2a8 00:14:59.721 12:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:59.980 12:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:59.980 00:14:59.980 real 0m15.761s 00:14:59.980 user 0m14.854s 00:14:59.980 sys 0m2.037s 00:14:59.980 12:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:59.980 12:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:59.980 ************************************ 00:14:59.980 END TEST lvs_grow_clean 00:14:59.980 ************************************ 00:14:59.980 12:03:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:59.980 12:03:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:59.980 12:03:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:59.980 12:03:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:59.980 ************************************ 00:14:59.980 START TEST lvs_grow_dirty 00:14:59.980 ************************************ 00:14:59.980 12:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # lvs_grow dirty 00:14:59.980 12:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:59.980 12:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:59.980 12:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:59.980 12:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:14:59.980 12:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:59.980 12:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:59.980 12:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:59.980 12:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:59.980 12:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:00.239 12:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:00.239 12:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:00.497 12:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=bd1d05ba-cf38-4828-9053-093dcd7bcad9 00:15:00.497 12:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd1d05ba-cf38-4828-9053-093dcd7bcad9 00:15:00.497 12:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:00.497 12:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:00.497 12:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:00.497 
12:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bd1d05ba-cf38-4828-9053-093dcd7bcad9 lvol 150 00:15:00.756 12:03:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c9caa833-053c-47ac-833d-a39eb075107f 00:15:00.756 12:03:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:00.756 12:03:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:01.016 [2024-06-10 12:03:50.308124] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:01.016 [2024-06-10 12:03:50.308172] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:01.016 true 00:15:01.016 12:03:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd1d05ba-cf38-4828-9053-093dcd7bcad9 00:15:01.016 12:03:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:01.016 12:03:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:01.016 12:03:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:01.275 12:03:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 c9caa833-053c-47ac-833d-a39eb075107f 00:15:01.534 12:03:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:01.534 [2024-06-10 12:03:50.990162] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:01.534 12:03:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:01.793 12:03:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2169321 00:15:01.793 12:03:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:01.793 12:03:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:01.793 12:03:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2169321 /var/tmp/bdevperf.sock 00:15:01.793 12:03:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 2169321 ']' 00:15:01.793 12:03:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:01.793 12:03:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:01.793 12:03:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:01.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:01.793 12:03:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:01.793 12:03:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:01.793 [2024-06-10 12:03:51.219647] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:15:01.793 [2024-06-10 12:03:51.219697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2169321 ] 00:15:01.793 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.793 [2024-06-10 12:03:51.288908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.052 [2024-06-10 12:03:51.364665] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.620 12:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:02.620 12:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:15:02.620 12:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:02.879 Nvme0n1 00:15:02.879 12:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:03.138 [ 00:15:03.138 { 00:15:03.138 "name": "Nvme0n1", 00:15:03.138 "aliases": [ 00:15:03.138 "c9caa833-053c-47ac-833d-a39eb075107f" 00:15:03.138 ], 00:15:03.138 "product_name": "NVMe disk", 00:15:03.138 "block_size": 4096, 00:15:03.138 "num_blocks": 38912, 
00:15:03.138 "uuid": "c9caa833-053c-47ac-833d-a39eb075107f", 00:15:03.138 "assigned_rate_limits": { 00:15:03.138 "rw_ios_per_sec": 0, 00:15:03.138 "rw_mbytes_per_sec": 0, 00:15:03.138 "r_mbytes_per_sec": 0, 00:15:03.138 "w_mbytes_per_sec": 0 00:15:03.138 }, 00:15:03.138 "claimed": false, 00:15:03.138 "zoned": false, 00:15:03.138 "supported_io_types": { 00:15:03.138 "read": true, 00:15:03.139 "write": true, 00:15:03.139 "unmap": true, 00:15:03.139 "write_zeroes": true, 00:15:03.139 "flush": true, 00:15:03.139 "reset": true, 00:15:03.139 "compare": true, 00:15:03.139 "compare_and_write": true, 00:15:03.139 "abort": true, 00:15:03.139 "nvme_admin": true, 00:15:03.139 "nvme_io": true 00:15:03.139 }, 00:15:03.139 "memory_domains": [ 00:15:03.139 { 00:15:03.139 "dma_device_id": "system", 00:15:03.139 "dma_device_type": 1 00:15:03.139 } 00:15:03.139 ], 00:15:03.139 "driver_specific": { 00:15:03.139 "nvme": [ 00:15:03.139 { 00:15:03.139 "trid": { 00:15:03.139 "trtype": "TCP", 00:15:03.139 "adrfam": "IPv4", 00:15:03.139 "traddr": "10.0.0.2", 00:15:03.139 "trsvcid": "4420", 00:15:03.139 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:03.139 }, 00:15:03.139 "ctrlr_data": { 00:15:03.139 "cntlid": 1, 00:15:03.139 "vendor_id": "0x8086", 00:15:03.139 "model_number": "SPDK bdev Controller", 00:15:03.139 "serial_number": "SPDK0", 00:15:03.139 "firmware_revision": "24.09", 00:15:03.139 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:03.139 "oacs": { 00:15:03.139 "security": 0, 00:15:03.139 "format": 0, 00:15:03.139 "firmware": 0, 00:15:03.139 "ns_manage": 0 00:15:03.139 }, 00:15:03.139 "multi_ctrlr": true, 00:15:03.139 "ana_reporting": false 00:15:03.139 }, 00:15:03.139 "vs": { 00:15:03.139 "nvme_version": "1.3" 00:15:03.139 }, 00:15:03.139 "ns_data": { 00:15:03.139 "id": 1, 00:15:03.139 "can_share": true 00:15:03.139 } 00:15:03.139 } 00:15:03.139 ], 00:15:03.139 "mp_policy": "active_passive" 00:15:03.139 } 00:15:03.139 } 00:15:03.139 ] 00:15:03.139 12:03:52 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:03.139 12:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2169459 00:15:03.139 12:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:03.139 Running I/O for 10 seconds... 00:15:04.075 Latency(us) 00:15:04.075 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.075 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:04.075 Nvme0n1 : 1.00 23119.00 90.31 0.00 0.00 0.00 0.00 0.00 00:15:04.075 =================================================================================================================== 00:15:04.075 Total : 23119.00 90.31 0.00 0.00 0.00 0.00 0.00 00:15:04.075 00:15:05.012 12:03:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bd1d05ba-cf38-4828-9053-093dcd7bcad9 00:15:05.012 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:05.012 Nvme0n1 : 2.00 23231.50 90.75 0.00 0.00 0.00 0.00 0.00 00:15:05.012 =================================================================================================================== 00:15:05.012 Total : 23231.50 90.75 0.00 0.00 0.00 0.00 0.00 00:15:05.012 00:15:05.271 true 00:15:05.271 12:03:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd1d05ba-cf38-4828-9053-093dcd7bcad9 00:15:05.271 12:03:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:05.530 12:03:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 
00:15:05.530 12:03:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:05.530 12:03:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2169459 00:15:06.104 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:06.104 Nvme0n1 : 3.00 23263.67 90.87 0.00 0.00 0.00 0.00 0.00 00:15:06.104 =================================================================================================================== 00:15:06.104 Total : 23263.67 90.87 0.00 0.00 0.00 0.00 0.00 00:15:06.104 00:15:07.039 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:07.039 Nvme0n1 : 4.00 23323.75 91.11 0.00 0.00 0.00 0.00 0.00 00:15:07.039 =================================================================================================================== 00:15:07.039 Total : 23323.75 91.11 0.00 0.00 0.00 0.00 0.00 00:15:07.039 00:15:08.414 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:08.414 Nvme0n1 : 5.00 23367.80 91.28 0.00 0.00 0.00 0.00 0.00 00:15:08.414 =================================================================================================================== 00:15:08.414 Total : 23367.80 91.28 0.00 0.00 0.00 0.00 0.00 00:15:08.415 00:15:09.348 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:09.348 Nvme0n1 : 6.00 23407.83 91.44 0.00 0.00 0.00 0.00 0.00 00:15:09.348 =================================================================================================================== 00:15:09.348 Total : 23407.83 91.44 0.00 0.00 0.00 0.00 0.00 00:15:09.348 00:15:10.283 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:10.283 Nvme0n1 : 7.00 23435.29 91.54 0.00 0.00 0.00 0.00 0.00 00:15:10.283 =================================================================================================================== 00:15:10.283 Total : 23435.29 91.54 
0.00 0.00 0.00 0.00 0.00 00:15:10.283 00:15:11.220 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:11.220 Nvme0n1 : 8.00 23415.88 91.47 0.00 0.00 0.00 0.00 0.00 00:15:11.220 =================================================================================================================== 00:15:11.220 Total : 23415.88 91.47 0.00 0.00 0.00 0.00 0.00 00:15:11.220 00:15:12.156 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:12.156 Nvme0n1 : 9.00 23441.67 91.57 0.00 0.00 0.00 0.00 0.00 00:15:12.156 =================================================================================================================== 00:15:12.156 Total : 23441.67 91.57 0.00 0.00 0.00 0.00 0.00 00:15:12.156 00:15:13.093 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:13.093 Nvme0n1 : 10.00 23446.30 91.59 0.00 0.00 0.00 0.00 0.00 00:15:13.093 =================================================================================================================== 00:15:13.093 Total : 23446.30 91.59 0.00 0.00 0.00 0.00 0.00 00:15:13.093 00:15:13.093 00:15:13.093 Latency(us) 00:15:13.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.093 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:13.093 Nvme0n1 : 10.01 23445.98 91.59 0.00 0.00 5455.41 3670.02 9175.04 00:15:13.093 =================================================================================================================== 00:15:13.093 Total : 23445.98 91.59 0.00 0.00 5455.41 3670.02 9175.04 00:15:13.093 0 00:15:13.093 12:04:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2169321 00:15:13.093 12:04:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@949 -- # '[' -z 2169321 ']' 00:15:13.093 12:04:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # kill -0 2169321 00:15:13.093 12:04:02 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # uname 00:15:13.093 12:04:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:13.093 12:04:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2169321 00:15:13.353 12:04:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:15:13.353 12:04:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:15:13.353 12:04:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2169321' 00:15:13.353 killing process with pid 2169321 00:15:13.353 12:04:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # kill 2169321 00:15:13.353 Received shutdown signal, test time was about 10.000000 seconds 00:15:13.353 00:15:13.353 Latency(us) 00:15:13.353 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.353 =================================================================================================================== 00:15:13.353 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:13.353 12:04:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # wait 2169321 00:15:13.353 12:04:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:13.612 12:04:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:13.871 12:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:13.871 12:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd1d05ba-cf38-4828-9053-093dcd7bcad9 00:15:13.871 12:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:13.871 12:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:15:13.871 12:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2166018 00:15:13.871 12:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2166018 00:15:14.131 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2166018 Killed "${NVMF_APP[@]}" "$@" 00:15:14.131 12:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:15:14.131 12:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:15:14.131 12:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:14.131 12:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:14.131 12:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:14.131 12:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2171286 00:15:14.131 12:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2171286 00:15:14.131 12:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:14.131 12:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 2171286 ']' 00:15:14.131 12:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.131 12:04:03 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:14.131 12:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.131 12:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:14.131 12:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:14.131 [2024-06-10 12:04:03.455833] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:15:14.131 [2024-06-10 12:04:03.455886] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:14.131 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.131 [2024-06-10 12:04:03.531685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.131 [2024-06-10 12:04:03.603395] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:14.131 [2024-06-10 12:04:03.603435] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:14.131 [2024-06-10 12:04:03.603445] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:14.131 [2024-06-10 12:04:03.603454] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:14.131 [2024-06-10 12:04:03.603461] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:14.131 [2024-06-10 12:04:03.603485] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.069 12:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:15.069 12:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:15:15.069 12:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:15.069 12:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:15.069 12:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:15.069 12:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:15.069 12:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:15.069 [2024-06-10 12:04:04.447967] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:15.069 [2024-06-10 12:04:04.448075] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:15.069 [2024-06-10 12:04:04.448101] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:15.069 12:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:15:15.069 12:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c9caa833-053c-47ac-833d-a39eb075107f 00:15:15.069 12:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=c9caa833-053c-47ac-833d-a39eb075107f 00:15:15.069 12:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:15.069 12:04:04 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:15:15.069 12:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:15.069 12:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:15.069 12:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:15.327 12:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c9caa833-053c-47ac-833d-a39eb075107f -t 2000 00:15:15.327 [ 00:15:15.327 { 00:15:15.327 "name": "c9caa833-053c-47ac-833d-a39eb075107f", 00:15:15.327 "aliases": [ 00:15:15.327 "lvs/lvol" 00:15:15.327 ], 00:15:15.327 "product_name": "Logical Volume", 00:15:15.327 "block_size": 4096, 00:15:15.327 "num_blocks": 38912, 00:15:15.327 "uuid": "c9caa833-053c-47ac-833d-a39eb075107f", 00:15:15.327 "assigned_rate_limits": { 00:15:15.327 "rw_ios_per_sec": 0, 00:15:15.327 "rw_mbytes_per_sec": 0, 00:15:15.327 "r_mbytes_per_sec": 0, 00:15:15.327 "w_mbytes_per_sec": 0 00:15:15.327 }, 00:15:15.327 "claimed": false, 00:15:15.327 "zoned": false, 00:15:15.327 "supported_io_types": { 00:15:15.327 "read": true, 00:15:15.327 "write": true, 00:15:15.327 "unmap": true, 00:15:15.327 "write_zeroes": true, 00:15:15.327 "flush": false, 00:15:15.327 "reset": true, 00:15:15.327 "compare": false, 00:15:15.327 "compare_and_write": false, 00:15:15.327 "abort": false, 00:15:15.327 "nvme_admin": false, 00:15:15.327 "nvme_io": false 00:15:15.327 }, 00:15:15.327 "driver_specific": { 00:15:15.327 "lvol": { 00:15:15.327 "lvol_store_uuid": "bd1d05ba-cf38-4828-9053-093dcd7bcad9", 00:15:15.327 "base_bdev": "aio_bdev", 00:15:15.327 "thin_provision": false, 00:15:15.327 "num_allocated_clusters": 38, 00:15:15.327 "snapshot": false, 00:15:15.327 
"clone": false, 00:15:15.327 "esnap_clone": false 00:15:15.327 } 00:15:15.328 } 00:15:15.328 } 00:15:15.328 ] 00:15:15.328 12:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:15:15.328 12:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd1d05ba-cf38-4828-9053-093dcd7bcad9 00:15:15.328 12:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:15:15.585 12:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:15:15.585 12:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:15:15.585 12:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd1d05ba-cf38-4828-9053-093dcd7bcad9 00:15:15.885 12:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:15:15.885 12:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:15.885 [2024-06-10 12:04:05.280216] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:15.885 12:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd1d05ba-cf38-4828-9053-093dcd7bcad9 00:15:15.885 12:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:15:15.885 12:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u bd1d05ba-cf38-4828-9053-093dcd7bcad9 00:15:15.885 12:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:15.885 12:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:15.885 12:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:15.885 12:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:15.885 12:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:15.885 12:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:15.885 12:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:15.885 12:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:15.885 12:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd1d05ba-cf38-4828-9053-093dcd7bcad9 00:15:16.142 request: 00:15:16.142 { 00:15:16.142 "uuid": "bd1d05ba-cf38-4828-9053-093dcd7bcad9", 00:15:16.142 "method": "bdev_lvol_get_lvstores", 00:15:16.142 "req_id": 1 00:15:16.142 } 00:15:16.142 Got JSON-RPC error response 00:15:16.142 response: 00:15:16.142 { 00:15:16.142 "code": -19, 00:15:16.142 "message": "No such device" 00:15:16.142 } 00:15:16.142 12:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:15:16.142 12:04:05 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:16.142 12:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:16.142 12:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:16.142 12:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:16.142 aio_bdev 00:15:16.142 12:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c9caa833-053c-47ac-833d-a39eb075107f 00:15:16.142 12:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=c9caa833-053c-47ac-833d-a39eb075107f 00:15:16.142 12:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:16.142 12:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:15:16.142 12:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:16.142 12:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:16.142 12:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:16.399 12:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c9caa833-053c-47ac-833d-a39eb075107f -t 2000 00:15:16.656 [ 00:15:16.656 { 00:15:16.656 "name": "c9caa833-053c-47ac-833d-a39eb075107f", 00:15:16.656 "aliases": [ 00:15:16.656 "lvs/lvol" 00:15:16.656 ], 00:15:16.656 "product_name": "Logical Volume", 00:15:16.656 "block_size": 4096, 
00:15:16.656 "num_blocks": 38912, 00:15:16.656 "uuid": "c9caa833-053c-47ac-833d-a39eb075107f", 00:15:16.656 "assigned_rate_limits": { 00:15:16.656 "rw_ios_per_sec": 0, 00:15:16.656 "rw_mbytes_per_sec": 0, 00:15:16.656 "r_mbytes_per_sec": 0, 00:15:16.656 "w_mbytes_per_sec": 0 00:15:16.656 }, 00:15:16.656 "claimed": false, 00:15:16.656 "zoned": false, 00:15:16.656 "supported_io_types": { 00:15:16.656 "read": true, 00:15:16.656 "write": true, 00:15:16.656 "unmap": true, 00:15:16.656 "write_zeroes": true, 00:15:16.656 "flush": false, 00:15:16.656 "reset": true, 00:15:16.656 "compare": false, 00:15:16.656 "compare_and_write": false, 00:15:16.656 "abort": false, 00:15:16.656 "nvme_admin": false, 00:15:16.656 "nvme_io": false 00:15:16.656 }, 00:15:16.656 "driver_specific": { 00:15:16.656 "lvol": { 00:15:16.656 "lvol_store_uuid": "bd1d05ba-cf38-4828-9053-093dcd7bcad9", 00:15:16.656 "base_bdev": "aio_bdev", 00:15:16.656 "thin_provision": false, 00:15:16.656 "num_allocated_clusters": 38, 00:15:16.656 "snapshot": false, 00:15:16.656 "clone": false, 00:15:16.656 "esnap_clone": false 00:15:16.656 } 00:15:16.656 } 00:15:16.656 } 00:15:16.656 ] 00:15:16.656 12:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:15:16.656 12:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd1d05ba-cf38-4828-9053-093dcd7bcad9 00:15:16.656 12:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:16.656 12:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:16.656 12:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd1d05ba-cf38-4828-9053-093dcd7bcad9 00:15:16.656 12:04:06 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:16.913 12:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:16.913 12:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c9caa833-053c-47ac-833d-a39eb075107f 00:15:17.171 12:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bd1d05ba-cf38-4828-9053-093dcd7bcad9 00:15:17.430 12:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:17.430 12:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:17.430 00:15:17.430 real 0m17.423s 00:15:17.430 user 0m43.658s 00:15:17.430 sys 0m4.891s 00:15:17.430 12:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:17.430 12:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:17.430 ************************************ 00:15:17.430 END TEST lvs_grow_dirty 00:15:17.430 ************************************ 00:15:17.430 12:04:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:17.430 12:04:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # type=--id 00:15:17.430 12:04:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # id=0 00:15:17.430 12:04:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:15:17.430 12:04:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:17.430 
12:04:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:15:17.430 12:04:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:15:17.430 12:04:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # for n in $shm_files 00:15:17.430 12:04:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:17.430 nvmf_trace.0 00:15:17.689 12:04:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # return 0 00:15:17.689 12:04:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:17.689 12:04:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:17.689 12:04:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:15:17.689 12:04:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:17.689 12:04:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:15:17.689 12:04:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:17.689 12:04:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:17.689 rmmod nvme_tcp 00:15:17.689 rmmod nvme_fabrics 00:15:17.689 rmmod nvme_keyring 00:15:17.689 12:04:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:17.689 12:04:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:15:17.689 12:04:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:15:17.689 12:04:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2171286 ']' 00:15:17.689 12:04:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2171286 00:15:17.689 12:04:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@949 -- # '[' -z 2171286 ']' 00:15:17.689 12:04:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # kill -0 2171286 00:15:17.689 12:04:07 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # uname 00:15:17.689 12:04:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:17.689 12:04:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2171286 00:15:17.689 12:04:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:17.689 12:04:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:17.689 12:04:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2171286' 00:15:17.689 killing process with pid 2171286 00:15:17.689 12:04:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # kill 2171286 00:15:17.689 12:04:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # wait 2171286 00:15:17.947 12:04:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:17.947 12:04:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:17.947 12:04:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:17.947 12:04:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:17.947 12:04:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:17.947 12:04:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.947 12:04:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:17.947 12:04:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.848 12:04:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:19.848 00:15:19.848 real 0m43.799s 00:15:19.848 user 1m4.579s 00:15:19.848 sys 0m12.619s 00:15:19.848 12:04:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:19.848 12:04:09 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:19.848 ************************************ 00:15:19.848 END TEST nvmf_lvs_grow 00:15:19.848 ************************************ 00:15:20.106 12:04:09 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:20.106 12:04:09 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:20.106 12:04:09 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:20.106 12:04:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:20.106 ************************************ 00:15:20.106 START TEST nvmf_bdev_io_wait 00:15:20.106 ************************************ 00:15:20.106 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:20.106 * Looking for test storage... 
00:15:20.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:20.106 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:20.106 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:15:20.106 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:20.106 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:20.106 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:20.106 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:20.106 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:20.106 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:20.106 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:20.106 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:20.106 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:20.106 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:20.106 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:20.106 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:20.106 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:20.106 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:20.106 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:20.106 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:20.106 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:20.106 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:20.106 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:20.106 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:20.106 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.106 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.106 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.106 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:15:20.106 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.106 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:15:20.106 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:20.106 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:20.106 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:20.106 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:20.106 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:20.107 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:20.107 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:15:20.107 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:20.107 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:20.107 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:20.107 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:20.107 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:20.107 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:20.107 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:20.107 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:20.107 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:20.107 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.107 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:20.107 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.107 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:20.107 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:20.107 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:15:20.107 12:04:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:26.762 12:04:15 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:26.762 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:26.762 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:26.762 12:04:15 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:26.762 Found net devices under 0000:af:00.0: cvl_0_0 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:26.762 Found net devices under 0000:af:00.1: cvl_0_1 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- 
# NVMF_SECOND_TARGET_IP= 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:26.762 12:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:26.762 12:04:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:26.762 12:04:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:26.762 12:04:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:26.762 12:04:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:26.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:26.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:15:26.762 00:15:26.762 --- 10.0.0.2 ping statistics --- 00:15:26.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.762 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:15:26.762 12:04:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:26.762 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:26.762 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:15:26.762 00:15:26.762 --- 10.0.0.1 ping statistics --- 00:15:26.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.762 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:15:26.762 12:04:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:26.762 12:04:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:15:26.762 12:04:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:26.762 12:04:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:26.762 12:04:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:26.762 12:04:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:26.762 12:04:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:26.762 12:04:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:26.762 12:04:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:26.762 12:04:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:26.762 12:04:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:26.762 12:04:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:26.763 12:04:16 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:15:26.763 12:04:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2175744 00:15:26.763 12:04:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2175744 00:15:26.763 12:04:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@830 -- # '[' -z 2175744 ']' 00:15:26.763 12:04:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.763 12:04:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:26.763 12:04:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.763 12:04:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:26.763 12:04:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:26.763 12:04:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:26.763 [2024-06-10 12:04:16.186193] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:15:26.763 [2024-06-10 12:04:16.186242] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.763 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.763 [2024-06-10 12:04:16.260315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:27.022 [2024-06-10 12:04:16.338627] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:27.022 [2024-06-10 12:04:16.338669] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:27.022 [2024-06-10 12:04:16.338678] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:27.022 [2024-06-10 12:04:16.338688] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:27.022 [2024-06-10 12:04:16.338695] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:27.022 [2024-06-10 12:04:16.338742] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.022 [2024-06-10 12:04:16.338762] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:15:27.022 [2024-06-10 12:04:16.338845] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:15:27.022 [2024-06-10 12:04:16.338847] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.592 12:04:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:27.592 12:04:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@863 -- # return 0 00:15:27.592 12:04:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:27.592 12:04:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:27.592 12:04:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:27.592 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:27.592 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:27.592 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:27.592 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:27.592 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:27.592 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:27.592 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:27.592 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:27.592 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:27.592 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:27.592 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:27.592 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:27.592 [2024-06-10 12:04:17.108557] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:27.852 Malloc0 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:27.852 [2024-06-10 12:04:17.172286] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2175791 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2175794 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:27.852 { 00:15:27.852 "params": { 00:15:27.852 "name": "Nvme$subsystem", 00:15:27.852 "trtype": "$TEST_TRANSPORT", 
00:15:27.852 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:27.852 "adrfam": "ipv4", 00:15:27.852 "trsvcid": "$NVMF_PORT", 00:15:27.852 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:27.852 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:27.852 "hdgst": ${hdgst:-false}, 00:15:27.852 "ddgst": ${ddgst:-false} 00:15:27.852 }, 00:15:27.852 "method": "bdev_nvme_attach_controller" 00:15:27.852 } 00:15:27.852 EOF 00:15:27.852 )") 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2175796 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:27.852 { 00:15:27.852 "params": { 00:15:27.852 "name": "Nvme$subsystem", 00:15:27.852 "trtype": "$TEST_TRANSPORT", 00:15:27.852 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:27.852 "adrfam": "ipv4", 00:15:27.852 "trsvcid": "$NVMF_PORT", 00:15:27.852 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:27.852 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:27.852 "hdgst": ${hdgst:-false}, 00:15:27.852 "ddgst": ${ddgst:-false} 00:15:27.852 }, 00:15:27.852 "method": "bdev_nvme_attach_controller" 00:15:27.852 } 00:15:27.852 EOF 00:15:27.852 )") 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 
128 -o 4096 -w flush -t 1 -s 256 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2175799 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:27.852 { 00:15:27.852 "params": { 00:15:27.852 "name": "Nvme$subsystem", 00:15:27.852 "trtype": "$TEST_TRANSPORT", 00:15:27.852 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:27.852 "adrfam": "ipv4", 00:15:27.852 "trsvcid": "$NVMF_PORT", 00:15:27.852 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:27.852 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:27.852 "hdgst": ${hdgst:-false}, 00:15:27.852 "ddgst": ${ddgst:-false} 00:15:27.852 }, 00:15:27.852 "method": "bdev_nvme_attach_controller" 00:15:27.852 } 00:15:27.852 EOF 00:15:27.852 )") 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:27.852 { 00:15:27.852 "params": { 00:15:27.852 "name": "Nvme$subsystem", 00:15:27.852 "trtype": "$TEST_TRANSPORT", 00:15:27.852 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:27.852 "adrfam": "ipv4", 00:15:27.852 "trsvcid": "$NVMF_PORT", 00:15:27.852 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:27.852 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:27.852 "hdgst": ${hdgst:-false}, 00:15:27.852 "ddgst": ${ddgst:-false} 00:15:27.852 }, 00:15:27.852 "method": "bdev_nvme_attach_controller" 00:15:27.852 } 00:15:27.852 EOF 00:15:27.852 )") 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2175791 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:27.852 "params": { 00:15:27.852 "name": "Nvme1", 00:15:27.852 "trtype": "tcp", 00:15:27.852 "traddr": "10.0.0.2", 00:15:27.852 "adrfam": "ipv4", 00:15:27.852 "trsvcid": "4420", 00:15:27.852 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:27.852 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:27.852 "hdgst": false, 00:15:27.852 "ddgst": false 00:15:27.852 }, 00:15:27.852 "method": "bdev_nvme_attach_controller" 00:15:27.852 }' 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:27.852 "params": { 00:15:27.852 "name": "Nvme1", 00:15:27.852 "trtype": "tcp", 00:15:27.852 "traddr": "10.0.0.2", 00:15:27.852 "adrfam": "ipv4", 00:15:27.852 "trsvcid": "4420", 00:15:27.852 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:27.852 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:27.852 "hdgst": false, 00:15:27.852 "ddgst": false 00:15:27.852 }, 00:15:27.852 "method": "bdev_nvme_attach_controller" 00:15:27.852 }' 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:27.852 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:27.852 "params": { 00:15:27.852 "name": "Nvme1", 00:15:27.853 "trtype": "tcp", 00:15:27.853 "traddr": "10.0.0.2", 00:15:27.853 "adrfam": "ipv4", 00:15:27.853 "trsvcid": "4420", 00:15:27.853 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:27.853 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:27.853 "hdgst": false, 00:15:27.853 "ddgst": false 00:15:27.853 }, 00:15:27.853 "method": "bdev_nvme_attach_controller" 00:15:27.853 }' 00:15:27.853 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:27.853 12:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:27.853 "params": { 00:15:27.853 "name": "Nvme1", 00:15:27.853 "trtype": "tcp", 00:15:27.853 "traddr": "10.0.0.2", 00:15:27.853 "adrfam": "ipv4", 00:15:27.853 "trsvcid": "4420", 00:15:27.853 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:27.853 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:27.853 "hdgst": false, 00:15:27.853 "ddgst": false 00:15:27.853 }, 00:15:27.853 "method": "bdev_nvme_attach_controller" 00:15:27.853 }' 00:15:27.853 [2024-06-10 12:04:17.218664] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:15:27.853 [2024-06-10 12:04:17.218715] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:27.853 [2024-06-10 12:04:17.226051] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:15:27.853 [2024-06-10 12:04:17.226095] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:27.853 [2024-06-10 12:04:17.226882] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:15:27.853 [2024-06-10 12:04:17.226923] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:27.853 [2024-06-10 12:04:17.227776] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:15:27.853 [2024-06-10 12:04:17.227823] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:27.853 EAL: No free 2048 kB hugepages reported on node 1 00:15:27.853 EAL: No free 2048 kB hugepages reported on node 1 00:15:28.112 [2024-06-10 12:04:17.397979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.112 EAL: No free 2048 kB hugepages reported on node 1 00:15:28.112 [2024-06-10 12:04:17.472373] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:15:28.112 [2024-06-10 12:04:17.487219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.112 EAL: No free 2048 kB hugepages reported on node 1 00:15:28.112 [2024-06-10 12:04:17.561613] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:15:28.112 [2024-06-10 12:04:17.577991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.372 [2024-06-10 12:04:17.639695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.372 [2024-06-10 12:04:17.660403] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:15:28.372 [2024-06-10 12:04:17.713438] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:15:28.372 Running I/O for 1 seconds... 00:15:28.631 Running I/O for 1 seconds... 00:15:28.631 Running I/O for 1 seconds... 00:15:28.631 Running I/O for 1 seconds... 
00:15:29.568 00:15:29.568 Latency(us) 00:15:29.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.568 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:29.568 Nvme1n1 : 1.01 9862.80 38.53 0.00 0.00 12915.41 5714.74 19084.08 00:15:29.568 =================================================================================================================== 00:15:29.568 Total : 9862.80 38.53 0.00 0.00 12915.41 5714.74 19084.08 00:15:29.568 00:15:29.568 Latency(us) 00:15:29.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.568 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:29.568 Nvme1n1 : 1.01 8651.35 33.79 0.00 0.00 14754.92 5033.16 28101.84 00:15:29.568 =================================================================================================================== 00:15:29.568 Total : 8651.35 33.79 0.00 0.00 14754.92 5033.16 28101.84 00:15:29.568 00:15:29.568 Latency(us) 00:15:29.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.568 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:29.568 Nvme1n1 : 1.00 259351.94 1013.09 0.00 0.00 491.90 205.62 645.53 00:15:29.568 =================================================================================================================== 00:15:29.568 Total : 259351.94 1013.09 0.00 0.00 491.90 205.62 645.53 00:15:29.568 00:15:29.568 Latency(us) 00:15:29.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.569 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:29.569 Nvme1n1 : 1.01 11718.02 45.77 0.00 0.00 10889.12 5793.38 21495.81 00:15:29.569 =================================================================================================================== 00:15:29.569 Total : 11718.02 45.77 0.00 0.00 10889.12 5793.38 21495.81 00:15:29.828 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 2175794 00:15:29.828 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2175796 00:15:29.828 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2175799 00:15:29.828 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:29.828 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:29.828 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:29.828 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:29.828 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:29.828 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:29.828 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:29.828 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:15:29.828 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:29.828 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:15:29.828 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:29.828 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:29.828 rmmod nvme_tcp 00:15:29.828 rmmod nvme_fabrics 00:15:29.828 rmmod nvme_keyring 00:15:29.828 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:29.828 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:29.828 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:29.828 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2175744 ']' 00:15:29.828 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2175744 00:15:29.828 12:04:19 
nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@949 -- # '[' -z 2175744 ']' 00:15:29.828 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # kill -0 2175744 00:15:29.828 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # uname 00:15:29.828 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:29.828 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2175744 00:15:30.088 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:30.088 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:30.088 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2175744' 00:15:30.088 killing process with pid 2175744 00:15:30.088 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # kill 2175744 00:15:30.088 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # wait 2175744 00:15:30.088 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:30.088 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:30.088 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:30.088 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:30.088 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:30.088 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.088 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:30.088 12:04:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.624 12:04:21 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:32.624 00:15:32.624 real 0m12.229s 00:15:32.624 user 0m20.047s 00:15:32.624 sys 0m6.954s 00:15:32.624 12:04:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:32.624 12:04:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:32.624 ************************************ 00:15:32.624 END TEST nvmf_bdev_io_wait 00:15:32.624 ************************************ 00:15:32.624 12:04:21 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:32.624 12:04:21 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:32.624 12:04:21 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:32.624 12:04:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:32.624 ************************************ 00:15:32.624 START TEST nvmf_queue_depth 00:15:32.624 ************************************ 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:32.624 * Looking for test storage... 
00:15:32.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.624 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:32.625 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:32.625 12:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:15:32.625 12:04:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local 
-a pci_devs 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:39.198 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:39.198 Found 0000:af:00.1 (0x8086 - 
0x159b) 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:39.198 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:39.199 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:39.199 Found net devices under 0000:af:00.0: cvl_0_0 00:15:39.199 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:39.199 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:39.199 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:39.199 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:39.199 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:39.199 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:39.199 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:39.199 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:39.199 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:39.199 Found net devices under 0000:af:00.1: cvl_0_1 00:15:39.199 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:39.199 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:39.199 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:15:39.199 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:39.199 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:39.199 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:39.199 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:39.199 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:39.199 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:39.199 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:39.199 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:39.199 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:39.199 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:15:39.199 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:39.199 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:39.199 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:39.199 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:39.199 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:39.199 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:39.199 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:39.199 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:39.199 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:39.199 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:39.459 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:39.459 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:39.459 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:39.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:39.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:15:39.459 00:15:39.459 --- 10.0.0.2 ping statistics --- 00:15:39.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.459 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:15:39.459 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:39.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:39.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:15:39.459 00:15:39.459 --- 10.0.0.1 ping statistics --- 00:15:39.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.459 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:15:39.459 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:39.459 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:15:39.459 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:39.459 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:39.459 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:39.459 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:39.459 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:39.459 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:39.459 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:39.459 12:04:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:39.459 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:39.459 12:04:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:39.459 12:04:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:15:39.459 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2180037 00:15:39.459 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2180037 00:15:39.459 12:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:39.459 12:04:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 2180037 ']' 00:15:39.459 12:04:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.459 12:04:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:39.459 12:04:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.459 12:04:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:39.459 12:04:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:39.459 [2024-06-10 12:04:28.864792] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:15:39.459 [2024-06-10 12:04:28.864841] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.459 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.459 [2024-06-10 12:04:28.937545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.718 [2024-06-10 12:04:29.004574] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:39.718 [2024-06-10 12:04:29.004612] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:39.718 [2024-06-10 12:04:29.004621] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:39.718 [2024-06-10 12:04:29.004629] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:39.718 [2024-06-10 12:04:29.004653] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:39.718 [2024-06-10 12:04:29.004675] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:40.286 [2024-06-10 12:04:29.706440] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:40.286 12:04:29 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:40.286 Malloc0 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:40.286 [2024-06-10 12:04:29.756665] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2180132 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id 
$NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2180132 /var/tmp/bdevperf.sock 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 2180132 ']' 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:40.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:40.286 12:04:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:40.545 [2024-06-10 12:04:29.807252] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:15:40.545 [2024-06-10 12:04:29.807301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2180132 ] 00:15:40.545 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.545 [2024-06-10 12:04:29.878053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.545 [2024-06-10 12:04:29.953022] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.114 12:04:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:41.114 12:04:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:15:41.114 12:04:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:41.114 12:04:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:41.114 12:04:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:41.373 NVMe0n1 00:15:41.373 12:04:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:41.373 12:04:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:41.632 Running I/O for 10 seconds... 
00:15:51.613 00:15:51.613 Latency(us) 00:15:51.613 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:51.613 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:51.613 Verification LBA range: start 0x0 length 0x4000 00:15:51.613 NVMe0n1 : 10.06 13094.55 51.15 0.00 0.00 77955.54 21390.95 52638.52 00:15:51.613 =================================================================================================================== 00:15:51.613 Total : 13094.55 51.15 0.00 0.00 77955.54 21390.95 52638.52 00:15:51.613 0 00:15:51.613 12:04:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2180132 00:15:51.613 12:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 2180132 ']' 00:15:51.613 12:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 2180132 00:15:51.613 12:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:15:51.613 12:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:51.613 12:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2180132 00:15:51.613 12:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:51.613 12:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:51.613 12:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2180132' 00:15:51.613 killing process with pid 2180132 00:15:51.613 12:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 2180132 00:15:51.613 Received shutdown signal, test time was about 10.000000 seconds 00:15:51.613 00:15:51.613 Latency(us) 00:15:51.613 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:51.613 
=================================================================================================================== 00:15:51.613 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:51.613 12:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 2180132 00:15:51.872 12:04:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:51.872 12:04:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:51.872 12:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:51.872 12:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:15:51.872 12:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:51.872 12:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:15:51.872 12:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:51.872 12:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:51.872 rmmod nvme_tcp 00:15:51.872 rmmod nvme_fabrics 00:15:51.872 rmmod nvme_keyring 00:15:51.872 12:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:51.872 12:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:15:51.872 12:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:15:51.872 12:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2180037 ']' 00:15:51.872 12:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2180037 00:15:51.872 12:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 2180037 ']' 00:15:51.872 12:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 2180037 00:15:51.872 12:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:15:51.872 12:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:51.872 12:04:41 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2180037 00:15:51.872 12:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:15:51.872 12:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:15:51.872 12:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2180037' 00:15:51.872 killing process with pid 2180037 00:15:51.872 12:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 2180037 00:15:51.872 12:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 2180037 00:15:52.130 12:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:52.130 12:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:52.130 12:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:52.130 12:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:52.130 12:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:52.130 12:04:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.130 12:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:52.130 12:04:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.662 12:04:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:54.662 00:15:54.662 real 0m21.924s 00:15:54.662 user 0m25.073s 00:15:54.662 sys 0m7.236s 00:15:54.662 12:04:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:54.662 12:04:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:54.662 ************************************ 00:15:54.662 END TEST nvmf_queue_depth 
00:15:54.662 ************************************ 00:15:54.662 12:04:43 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:54.662 12:04:43 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:54.662 12:04:43 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:54.662 12:04:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:54.662 ************************************ 00:15:54.662 START TEST nvmf_target_multipath 00:15:54.662 ************************************ 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:54.662 * Looking for test storage... 00:15:54.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # 
nvmftestinit 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:54.662 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.663 12:04:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:54.663 12:04:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.663 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:54.663 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:54.663 12:04:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:15:54.663 12:04:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:01.228 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:01.228 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:16:01.228 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:01.228 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:01.228 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:01.228 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:01.228 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:01.228 
12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:16:01.228 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:01.228 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:16:01.228 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:16:01.228 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:16:01.228 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:16:01.228 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:16:01.228 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:16:01.228 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:01.228 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:01.228 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:01.228 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:01.228 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:01.228 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:01.228 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:01.228 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:01.228 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:01.228 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:01.228 12:04:50 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:01.228 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:01.228 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:01.228 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:01.228 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:01.228 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:01.228 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:01.228 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:01.229 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:01.229 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:01.229 
12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:01.229 Found net devices under 0000:af:00.0: cvl_0_0 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:01.229 Found net devices under 0000:af:00.1: cvl_0_1 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:01.229 12:04:50 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:01.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:01.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:16:01.229 00:16:01.229 --- 10.0.0.2 ping statistics --- 00:16:01.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.229 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:01.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:01.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:16:01.229 00:16:01.229 --- 10.0.0.1 ping statistics --- 00:16:01.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.229 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:16:01.229 only one NIC for nvmf test 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # 
nvmftestfini 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:01.229 rmmod nvme_tcp 00:16:01.229 rmmod nvme_fabrics 00:16:01.229 rmmod nvme_keyring 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:01.229 12:04:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:03.802 12:04:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:16:03.802 12:04:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:16:03.802 12:04:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:16:03.802 12:04:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:03.802 12:04:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:03.802 12:04:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:03.802 12:04:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:03.802 12:04:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:03.802 12:04:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:03.802 12:04:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:03.802 12:04:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:03.802 12:04:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:03.802 12:04:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:03.802 12:04:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:03.802 12:04:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:03.802 12:04:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:03.802 12:04:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:03.802 12:04:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:03.802 12:04:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.802 12:04:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:03.802 12:04:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:16:03.802 12:04:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:03.802 00:16:03.802 real 0m9.079s 00:16:03.802 user 0m1.840s 00:16:03.802 sys 0m5.254s 00:16:03.802 12:04:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:03.802 12:04:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:03.802 ************************************ 00:16:03.802 END TEST nvmf_target_multipath 00:16:03.802 ************************************ 00:16:03.802 12:04:52 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:03.802 12:04:52 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:03.802 12:04:52 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:03.802 12:04:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:03.802 ************************************ 00:16:03.802 START TEST nvmf_zcopy 00:16:03.802 ************************************ 00:16:03.802 12:04:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:03.802 * Looking for test storage... 
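The `nvmftestfini`/`nvmfcleanup` path logged just above — sync, retry-unloading `nvme-tcp`, unloading `nvme-fabrics`, removing the namespace, and flushing the initiator address — can be summarized as a sketch. As with the setup, this is a dry-run reconstruction: `ip netns delete` stands in for the helper's internal `_remove_spdk_ns`, and the retry budget is shortened from the `{1..20}` loop visible in the trace.

```shell
# Dry-run sketch of the teardown traced above (nvmfcleanup + nvmf_tcp_fini).
# RUN=echo prints the privileged commands instead of running them.
RUN="${RUN:-echo}"

nvmf_teardown() {
    ns=$1 initiator_if=$2
    sync
    # Module removal can race with connection teardown; the trace shows the
    # helper retrying under "set +e" until rmmod succeeds.
    for i in 1 2 3; do
        $RUN modprobe -v -r nvme-tcp && break
        sleep 1
    done
    $RUN modprobe -v -r nvme-fabrics
    $RUN ip netns delete "$ns"
    $RUN ip -4 addr flush "$initiator_if"
}

nvmf_teardown cvl_0_0_ns_spdk cvl_0_1
```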
00:16:03.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:03.802 12:04:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:03.802 12:04:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:16:03.802 12:04:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:16:03.802 12:04:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:16:10.417 12:04:59 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:10.417 12:04:59 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:10.417 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:10.417 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:10.417 Found net devices under 0000:af:00.0: cvl_0_0 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:10.417 Found net devices under 0000:af:00.1: cvl_0_1 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:10.417 12:04:59 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:10.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:10.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:16:10.417 00:16:10.417 --- 10.0.0.2 ping statistics --- 00:16:10.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.417 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:16:10.417 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:10.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:10.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:16:10.417 00:16:10.417 --- 10.0.0.1 ping statistics --- 00:16:10.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.418 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:16:10.418 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:10.418 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:16:10.418 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:10.418 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:10.418 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:10.418 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:10.418 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:10.418 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:10.418 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:10.677 12:04:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:10.677 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:10.677 12:04:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:10.677 12:04:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:10.677 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2189553 00:16:10.677 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2189553 00:16:10.677 12:04:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:10.677 12:04:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@830 -- # '[' -z 2189553 ']' 00:16:10.678 12:04:59 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.678 12:04:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:10.678 12:04:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.678 12:04:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:10.678 12:04:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:10.678 [2024-06-10 12:05:00.016001] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:16:10.678 [2024-06-10 12:05:00.016052] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:10.678 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.678 [2024-06-10 12:05:00.092754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.678 [2024-06-10 12:05:00.167511] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:10.678 [2024-06-10 12:05:00.167556] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:10.678 [2024-06-10 12:05:00.167566] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:10.678 [2024-06-10 12:05:00.167574] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:10.678 [2024-06-10 12:05:00.167582] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
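`nvmfappstart` above launches `nvmf_tgt` inside the namespace and then blocks in `waitforlisten` until the RPC socket comes up ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."). A minimal version of that wait loop is sketched below; the socket path and the `max_retries=100` budget come from the trace, while the polling interval is an assumption (the real helper also checks that the target pid is still alive and probes it over RPC).

```shell
# Poll until a UNIX-domain RPC socket appears, as waitforlisten does for
# /var/tmp/spdk.sock. Returns 0 once the socket exists, 1 on timeout.
wait_for_rpc_sock() {
    sock=${1:-/var/tmp/spdk.sock} max_retries=${2:-100}
    i=0
    while [ "$i" -lt "$max_retries" ]; do
        [ -S "$sock" ] && return 0
        i=$((i + 1))
        sleep 0.1   # polling interval is a guess, not taken from the log
    done
    return 1
}

wait_for_rpc_sock /var/tmp/spdk.sock 1 || echo "no SPDK RPC socket yet"
```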
00:16:10.678 [2024-06-10 12:05:00.167603] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@863 -- # return 0 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:11.616 [2024-06-10 12:05:00.862945] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 
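The `rpc_cmd` sequence in this stretch of the trace configures the target for the zero-copy run: a TCP transport with zcopy enabled, a subsystem capped at 10 namespaces, data and discovery listeners on 10.0.0.2:4420, and a 32 MiB malloc bdev attached as namespace 1. Collected in one place as a dry-run sketch (the flags are taken from the log; the `rpc.py` path is an assumption about your SPDK checkout, and `rpc_cmd` in the real script routes these through the target's RPC socket):

```shell
# Dry-run collection of the target-side RPC calls from this part of the trace.
# RUN=echo prints the calls so the sketch runs without a live nvmf_tgt.
RUN="${RUN:-echo}"
RPC="scripts/rpc.py"   # hypothetical path; adjust to your SPDK tree

configure_zcopy_target() {
    $RUN "$RPC" nvmf_create_transport -t tcp -o -c 0 --zcopy
    $RUN "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RUN "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RUN "$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RUN "$RPC" bdev_malloc_create 32 4096 -b malloc0
    $RUN "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
}

configure_zcopy_target
```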
00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:11.616 [2024-06-10 12:05:00.883143] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:11.616 malloc0 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem 
config 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:11.616 { 00:16:11.616 "params": { 00:16:11.616 "name": "Nvme$subsystem", 00:16:11.616 "trtype": "$TEST_TRANSPORT", 00:16:11.616 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:11.616 "adrfam": "ipv4", 00:16:11.616 "trsvcid": "$NVMF_PORT", 00:16:11.616 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:11.616 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:11.616 "hdgst": ${hdgst:-false}, 00:16:11.616 "ddgst": ${ddgst:-false} 00:16:11.616 }, 00:16:11.616 "method": "bdev_nvme_attach_controller" 00:16:11.616 } 00:16:11.616 EOF 00:16:11.616 )") 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:11.616 12:05:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:11.616 "params": { 00:16:11.616 "name": "Nvme1", 00:16:11.616 "trtype": "tcp", 00:16:11.616 "traddr": "10.0.0.2", 00:16:11.616 "adrfam": "ipv4", 00:16:11.616 "trsvcid": "4420", 00:16:11.616 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:11.616 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:11.616 "hdgst": false, 00:16:11.616 "ddgst": false 00:16:11.616 }, 00:16:11.616 "method": "bdev_nvme_attach_controller" 00:16:11.616 }' 00:16:11.616 [2024-06-10 12:05:00.964338] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
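`gen_nvmf_target_json`, whose expanded heredoc and final `printf` appear above, emits the per-controller JSON that bdevperf reads over `--json /dev/fd/62`. A standalone sketch reproducing the same object (the values are exactly the substituted ones visible in the trace; how the helper wraps such entries into the full config file is not shown here):

```shell
# Reproduce the bdev_nvme_attach_controller entry that gen_nvmf_target_json
# printed in the trace above, with the log's substituted values.
gen_target_json() {
    cat <<EOF
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_target_json
```

Feeding the config through a file descriptor (`/dev/fd/62`) rather than a temp file keeps the credentials-bearing JSON off disk for the lifetime of the run.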
00:16:11.616 [2024-06-10 12:05:00.964387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2189616 ] 00:16:11.616 EAL: No free 2048 kB hugepages reported on node 1 00:16:11.617 [2024-06-10 12:05:01.034966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.617 [2024-06-10 12:05:01.109441] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.875 Running I/O for 10 seconds... 00:16:21.859 00:16:21.859 Latency(us) 00:16:21.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.859 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:21.859 Verification LBA range: start 0x0 length 0x1000 00:16:21.859 Nvme1n1 : 10.01 9046.38 70.67 0.00 0.00 14109.26 2464.15 23802.68 00:16:21.860 =================================================================================================================== 00:16:21.860 Total : 9046.38 70.67 0.00 0.00 14109.26 2464.15 23802.68 00:16:22.119 12:05:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2191465 00:16:22.119 12:05:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:16:22.119 12:05:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:22.119 12:05:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:22.119 12:05:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:22.119 12:05:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:22.119 12:05:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:22.119 12:05:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:22.119 12:05:11 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:22.119 { 00:16:22.119 "params": { 00:16:22.119 "name": "Nvme$subsystem", 00:16:22.119 "trtype": "$TEST_TRANSPORT", 00:16:22.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:22.119 "adrfam": "ipv4", 00:16:22.119 "trsvcid": "$NVMF_PORT", 00:16:22.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:22.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:22.119 "hdgst": ${hdgst:-false}, 00:16:22.119 "ddgst": ${ddgst:-false} 00:16:22.119 }, 00:16:22.119 "method": "bdev_nvme_attach_controller" 00:16:22.119 } 00:16:22.119 EOF 00:16:22.119 )") 00:16:22.119 [2024-06-10 12:05:11.479662] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.119 [2024-06-10 12:05:11.479696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.119 12:05:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:22.119 12:05:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:16:22.119 12:05:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:22.119 [2024-06-10 12:05:11.491661] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.119 [2024-06-10 12:05:11.491674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.119 12:05:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:22.119 "params": { 00:16:22.119 "name": "Nvme1", 00:16:22.119 "trtype": "tcp", 00:16:22.119 "traddr": "10.0.0.2", 00:16:22.119 "adrfam": "ipv4", 00:16:22.119 "trsvcid": "4420", 00:16:22.119 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:22.119 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:22.119 "hdgst": false, 00:16:22.119 "ddgst": false 00:16:22.119 }, 00:16:22.119 "method": "bdev_nvme_attach_controller" 00:16:22.119 }' 00:16:22.119 [2024-06-10 12:05:11.503688] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.119 [2024-06-10 
12:05:11.503701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.119 [2024-06-10 12:05:11.515720] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.119 [2024-06-10 12:05:11.515731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.119 [2024-06-10 12:05:11.522189] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:16:22.119 [2024-06-10 12:05:11.522233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2191465 ] 00:16:22.119 [2024-06-10 12:05:11.527753] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.119 [2024-06-10 12:05:11.527764] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.119 [2024-06-10 12:05:11.539784] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.119 [2024-06-10 12:05:11.539795] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.119 [2024-06-10 12:05:11.551814] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.119 [2024-06-10 12:05:11.551829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.119 EAL: No free 2048 kB hugepages reported on node 1 00:16:22.119 [2024-06-10 12:05:11.563847] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.119 [2024-06-10 12:05:11.563859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.119 [2024-06-10 12:05:11.575876] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.119 [2024-06-10 12:05:11.575887] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:16:22.119 [2024-06-10 12:05:11.587919] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.119 [2024-06-10 12:05:11.587930] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.119 [2024-06-10 12:05:11.592465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.119 [2024-06-10 12:05:11.599943] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.119 [2024-06-10 12:05:11.599955] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.119 [2024-06-10 12:05:11.611974] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.120 [2024-06-10 12:05:11.611986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.120 [2024-06-10 12:05:11.624006] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.120 [2024-06-10 12:05:11.624017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.120 [2024-06-10 12:05:11.636040] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.120 [2024-06-10 12:05:11.636062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.379 [2024-06-10 12:05:11.648069] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.379 [2024-06-10 12:05:11.648080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.379 [2024-06-10 12:05:11.660101] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.379 [2024-06-10 12:05:11.660112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.379 [2024-06-10 12:05:11.664246] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.379 [2024-06-10 12:05:11.672132] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.379 [2024-06-10 12:05:11.672143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.379 [2024-06-10 12:05:11.684173] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.379 [2024-06-10 12:05:11.684193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.379 [2024-06-10 12:05:11.696199] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.379 [2024-06-10 12:05:11.696214] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.379 [2024-06-10 12:05:11.708229] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.379 [2024-06-10 12:05:11.708242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.379 [2024-06-10 12:05:11.720259] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.379 [2024-06-10 12:05:11.720272] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.379 [2024-06-10 12:05:11.732288] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.379 [2024-06-10 12:05:11.732300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.379 [2024-06-10 12:05:11.744319] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.379 [2024-06-10 12:05:11.744330] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.379 [2024-06-10 12:05:11.756366] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.379 [2024-06-10 12:05:11.756386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.379 [2024-06-10 12:05:11.768397] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:16:22.379 [2024-06-10 12:05:11.768415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.379 [2024-06-10 12:05:11.780429] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.379 [2024-06-10 12:05:11.780442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.379 [2024-06-10 12:05:11.792456] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.379 [2024-06-10 12:05:11.792466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.379 [2024-06-10 12:05:11.804495] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.379 [2024-06-10 12:05:11.804522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.379 [2024-06-10 12:05:11.816529] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.379 [2024-06-10 12:05:11.816543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.379 [2024-06-10 12:05:11.828562] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.379 [2024-06-10 12:05:11.828577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.379 [2024-06-10 12:05:11.840602] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.379 [2024-06-10 12:05:11.840613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.379 [2024-06-10 12:05:11.852635] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.379 [2024-06-10 12:05:11.852645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.379 [2024-06-10 12:05:11.864671] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.379 
[2024-06-10 12:05:11.864682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.379 [2024-06-10 12:05:11.876704] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.379 [2024-06-10 12:05:11.876717] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.379 [2024-06-10 12:05:11.888737] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.379 [2024-06-10 12:05:11.888747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.639 [2024-06-10 12:05:11.900779] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.639 [2024-06-10 12:05:11.900790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.639 [2024-06-10 12:05:11.912808] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.639 [2024-06-10 12:05:11.912822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.639 [2024-06-10 12:05:11.924840] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.639 [2024-06-10 12:05:11.924851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.639 [2024-06-10 12:05:11.936873] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.639 [2024-06-10 12:05:11.936883] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.639 [2024-06-10 12:05:11.948908] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.639 [2024-06-10 12:05:11.948918] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.639 [2024-06-10 12:05:11.960942] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.639 [2024-06-10 12:05:11.960954] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.639 [2024-06-10 12:05:11.972980] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.639 [2024-06-10 12:05:11.972997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.639 Running I/O for 5 seconds... 00:16:22.639 [2024-06-10 12:05:11.985010] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.639 [2024-06-10 12:05:11.985020] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.639 [2024-06-10 12:05:12.001169] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.639 [2024-06-10 12:05:12.001189] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.639 [2024-06-10 12:05:12.014826] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.639 [2024-06-10 12:05:12.014846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.639 [2024-06-10 12:05:12.028175] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.639 [2024-06-10 12:05:12.028195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.639 [2024-06-10 12:05:12.041668] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.639 [2024-06-10 12:05:12.041687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.639 [2024-06-10 12:05:12.055379] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.639 [2024-06-10 12:05:12.055399] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.639 [2024-06-10 12:05:12.069093] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.639 [2024-06-10 12:05:12.069112] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.639 [2024-06-10 12:05:12.082951] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.639 [2024-06-10 12:05:12.082971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.639 [2024-06-10 12:05:12.096481] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.639 [2024-06-10 12:05:12.096501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.639 [2024-06-10 12:05:12.110009] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.639 [2024-06-10 12:05:12.110029] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.639 [2024-06-10 12:05:12.123449] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.639 [2024-06-10 12:05:12.123469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.639 [2024-06-10 12:05:12.137440] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.639 [2024-06-10 12:05:12.137461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.639 [2024-06-10 12:05:12.150851] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.639 [2024-06-10 12:05:12.150871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.898 [2024-06-10 12:05:12.164590] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.898 [2024-06-10 12:05:12.164612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.898 [2024-06-10 12:05:12.178415] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.898 [2024-06-10 12:05:12.178437] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:16:22.898 [2024-06-10 12:05:12.191901] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.898 [2024-06-10 12:05:12.191921] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.898 [2024-06-10 12:05:12.205843] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.898 [2024-06-10 12:05:12.205864] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.898 [2024-06-10 12:05:12.219214] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.898 [2024-06-10 12:05:12.219235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.898 [2024-06-10 12:05:12.232794] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.898 [2024-06-10 12:05:12.232814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.898 [2024-06-10 12:05:12.246467] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.899 [2024-06-10 12:05:12.246495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.899 [2024-06-10 12:05:12.259854] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.899 [2024-06-10 12:05:12.259875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.899 [2024-06-10 12:05:12.273680] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.899 [2024-06-10 12:05:12.273700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.899 [2024-06-10 12:05:12.287213] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.899 [2024-06-10 12:05:12.287233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.899 [2024-06-10 12:05:12.300752] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.899 [2024-06-10 12:05:12.300772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.899 [2024-06-10 12:05:12.314360] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.899 [2024-06-10 12:05:12.314380] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.899 [2024-06-10 12:05:12.327503] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.899 [2024-06-10 12:05:12.327524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.899 [2024-06-10 12:05:12.341189] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.899 [2024-06-10 12:05:12.341209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.899 [2024-06-10 12:05:12.354513] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.899 [2024-06-10 12:05:12.354532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.899 [2024-06-10 12:05:12.368050] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.899 [2024-06-10 12:05:12.368069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.899 [2024-06-10 12:05:12.381581] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.899 [2024-06-10 12:05:12.381601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.899 [2024-06-10 12:05:12.395094] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.899 [2024-06-10 12:05:12.395115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.899 [2024-06-10 12:05:12.408556] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:16:22.899 [2024-06-10 12:05:12.408577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.157 [2024-06-10 12:05:12.422588] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.157 [2024-06-10 12:05:12.422608] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.157 [2024-06-10 12:05:12.435767] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.157 [2024-06-10 12:05:12.435787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.157 [2024-06-10 12:05:12.449687] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.157 [2024-06-10 12:05:12.449707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.157 [2024-06-10 12:05:12.463246] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.157 [2024-06-10 12:05:12.463266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.157 [2024-06-10 12:05:12.476902] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.157 [2024-06-10 12:05:12.476921] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.157 [2024-06-10 12:05:12.490838] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.157 [2024-06-10 12:05:12.490857] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.157 [2024-06-10 12:05:12.504535] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.157 [2024-06-10 12:05:12.504559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.157 [2024-06-10 12:05:12.518655] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.157 
[2024-06-10 12:05:12.518675] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.157 [2024-06-10 12:05:12.529755] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.157 [2024-06-10 12:05:12.529775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.157 [2024-06-10 12:05:12.543960] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.157 [2024-06-10 12:05:12.543980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.157 [2024-06-10 12:05:12.557518] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.157 [2024-06-10 12:05:12.557537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.157 [2024-06-10 12:05:12.571213] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.157 [2024-06-10 12:05:12.571233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.157 [2024-06-10 12:05:12.585215] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.157 [2024-06-10 12:05:12.585235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.157 [2024-06-10 12:05:12.598706] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.157 [2024-06-10 12:05:12.598726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.157 [2024-06-10 12:05:12.612469] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.157 [2024-06-10 12:05:12.612495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.157 [2024-06-10 12:05:12.625949] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.157 [2024-06-10 12:05:12.625969] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.157 [2024-06-10 12:05:12.639790] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.157 [2024-06-10 12:05:12.639809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.157 [2024-06-10 12:05:12.653429] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.157 [2024-06-10 12:05:12.653448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.157 [2024-06-10 12:05:12.666868] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.158 [2024-06-10 12:05:12.666888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.416 [2024-06-10 12:05:12.680923] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.416 [2024-06-10 12:05:12.680942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.416 [2024-06-10 12:05:12.696414] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.416 [2024-06-10 12:05:12.696434] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.416 [2024-06-10 12:05:12.710001] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.416 [2024-06-10 12:05:12.710020] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.416 [2024-06-10 12:05:12.723443] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.416 [2024-06-10 12:05:12.723462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.416 [2024-06-10 12:05:12.737098] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.416 [2024-06-10 12:05:12.737116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:16:23.416 [2024-06-10 12:05:12.750861] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.416 [2024-06-10 12:05:12.750880] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.417 [2024-06-10 12:05:12.764643] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.417 [2024-06-10 12:05:12.764667] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.417 [2024-06-10 12:05:12.778315] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.417 [2024-06-10 12:05:12.778334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.417 [2024-06-10 12:05:12.791933] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.417 [2024-06-10 12:05:12.791952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.417 [2024-06-10 12:05:12.805838] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.417 [2024-06-10 12:05:12.805857] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.417 [2024-06-10 12:05:12.819352] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.417 [2024-06-10 12:05:12.819371] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.417 [2024-06-10 12:05:12.832640] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.417 [2024-06-10 12:05:12.832659] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.417 [2024-06-10 12:05:12.846189] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.417 [2024-06-10 12:05:12.846209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.417 [2024-06-10 12:05:12.859742] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.417 [2024-06-10 12:05:12.859761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.417 [2024-06-10 12:05:12.873305] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.417 [2024-06-10 12:05:12.873325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.417 [2024-06-10 12:05:12.887048] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.417 [2024-06-10 12:05:12.887068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.417 [2024-06-10 12:05:12.901734] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.417 [2024-06-10 12:05:12.901752] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.417 [2024-06-10 12:05:12.917303] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.417 [2024-06-10 12:05:12.917327] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.417 [2024-06-10 12:05:12.930773] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.417 [2024-06-10 12:05:12.930792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.676 [2024-06-10 12:05:12.944315] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.676 [2024-06-10 12:05:12.944334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.676 [2024-06-10 12:05:12.957991] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.676 [2024-06-10 12:05:12.958010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.676 [2024-06-10 12:05:12.971094] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:16:23.676 [2024-06-10 12:05:12.971113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.676
[2024-06-10 12:05:15.301923] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.016 [2024-06-10 12:05:15.315382] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.016 [2024-06-10 12:05:15.315403] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.016 [2024-06-10 12:05:15.329079] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.016 [2024-06-10 12:05:15.329104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.016 [2024-06-10 12:05:15.342628] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.016 [2024-06-10 12:05:15.342649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.016 [2024-06-10 12:05:15.356062] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.016 [2024-06-10 12:05:15.356082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.016 [2024-06-10 12:05:15.369671] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.016 [2024-06-10 12:05:15.369690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.016 [2024-06-10 12:05:15.383078] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.016 [2024-06-10 12:05:15.383098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.016 [2024-06-10 12:05:15.396991] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.016 [2024-06-10 12:05:15.397012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.016 [2024-06-10 12:05:15.410953] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.016 [2024-06-10 12:05:15.410973] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.016 [2024-06-10 12:05:15.423965] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.016 [2024-06-10 12:05:15.423984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.016 [2024-06-10 12:05:15.437811] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.016 [2024-06-10 12:05:15.437831] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.016 [2024-06-10 12:05:15.450791] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.016 [2024-06-10 12:05:15.450811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.016 [2024-06-10 12:05:15.464326] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.016 [2024-06-10 12:05:15.464346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.016 [2024-06-10 12:05:15.477506] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.016 [2024-06-10 12:05:15.477526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.016 [2024-06-10 12:05:15.491263] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.016 [2024-06-10 12:05:15.491283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.016 [2024-06-10 12:05:15.504630] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.016 [2024-06-10 12:05:15.504649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.016 [2024-06-10 12:05:15.518112] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.016 [2024-06-10 12:05:15.518132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:16:26.016 [2024-06-10 12:05:15.531420] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.016 [2024-06-10 12:05:15.531440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.275 [2024-06-10 12:05:15.544924] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.275 [2024-06-10 12:05:15.544944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.275 [2024-06-10 12:05:15.558523] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.275 [2024-06-10 12:05:15.558542] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.275 [2024-06-10 12:05:15.571864] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.275 [2024-06-10 12:05:15.571884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.275 [2024-06-10 12:05:15.585362] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.275 [2024-06-10 12:05:15.585386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.275 [2024-06-10 12:05:15.599142] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.275 [2024-06-10 12:05:15.599161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.275 [2024-06-10 12:05:15.612595] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.275 [2024-06-10 12:05:15.612615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.275 [2024-06-10 12:05:15.625856] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.275 [2024-06-10 12:05:15.625876] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.275 [2024-06-10 12:05:15.639750] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.275 [2024-06-10 12:05:15.639769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.275 [2024-06-10 12:05:15.653116] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.275 [2024-06-10 12:05:15.653135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.275 [2024-06-10 12:05:15.666418] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.275 [2024-06-10 12:05:15.666438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.275 [2024-06-10 12:05:15.679691] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.275 [2024-06-10 12:05:15.679711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.275 [2024-06-10 12:05:15.692578] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.275 [2024-06-10 12:05:15.692598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.275 [2024-06-10 12:05:15.706461] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.275 [2024-06-10 12:05:15.706488] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.275 [2024-06-10 12:05:15.719691] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.275 [2024-06-10 12:05:15.719711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.275 [2024-06-10 12:05:15.733065] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.275 [2024-06-10 12:05:15.733085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.275 [2024-06-10 12:05:15.746236] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:16:26.275 [2024-06-10 12:05:15.746255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.275 [2024-06-10 12:05:15.759567] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.275 [2024-06-10 12:05:15.759587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.275 [2024-06-10 12:05:15.773232] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.276 [2024-06-10 12:05:15.773252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.276 [2024-06-10 12:05:15.786456] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.276 [2024-06-10 12:05:15.786483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.534 [2024-06-10 12:05:15.799819] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.534 [2024-06-10 12:05:15.799839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.534 [2024-06-10 12:05:15.813173] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.534 [2024-06-10 12:05:15.813193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.534 [2024-06-10 12:05:15.826623] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.534 [2024-06-10 12:05:15.826644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.534 [2024-06-10 12:05:15.840878] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.534 [2024-06-10 12:05:15.840898] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.534 [2024-06-10 12:05:15.851539] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.534 
[2024-06-10 12:05:15.851563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.534 [2024-06-10 12:05:15.865349] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.534 [2024-06-10 12:05:15.865369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.534 [2024-06-10 12:05:15.878591] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.534 [2024-06-10 12:05:15.878610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.535 [2024-06-10 12:05:15.892300] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.535 [2024-06-10 12:05:15.892319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.535 [2024-06-10 12:05:15.905499] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.535 [2024-06-10 12:05:15.905518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.535 [2024-06-10 12:05:15.919341] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.535 [2024-06-10 12:05:15.919360] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.535 [2024-06-10 12:05:15.932700] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.535 [2024-06-10 12:05:15.932718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.535 [2024-06-10 12:05:15.946357] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.535 [2024-06-10 12:05:15.946376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.535 [2024-06-10 12:05:15.959579] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.535 [2024-06-10 12:05:15.959599] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.535 [2024-06-10 12:05:15.973400] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.535 [2024-06-10 12:05:15.973419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.535 [2024-06-10 12:05:15.987073] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.535 [2024-06-10 12:05:15.987092] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.535 [2024-06-10 12:05:16.000285] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.535 [2024-06-10 12:05:16.000304] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.535 [2024-06-10 12:05:16.013977] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.535 [2024-06-10 12:05:16.013995] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.535 [2024-06-10 12:05:16.027384] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.535 [2024-06-10 12:05:16.027404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.535 [2024-06-10 12:05:16.040580] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.535 [2024-06-10 12:05:16.040599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.535 [2024-06-10 12:05:16.053977] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.535 [2024-06-10 12:05:16.053996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.794 [2024-06-10 12:05:16.067286] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.794 [2024-06-10 12:05:16.067305] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:16:26.794 [2024-06-10 12:05:16.080540] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.794 [2024-06-10 12:05:16.080559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.794 [2024-06-10 12:05:16.094003] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.794 [2024-06-10 12:05:16.094022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.794 [2024-06-10 12:05:16.107404] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.794 [2024-06-10 12:05:16.107422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.794 [2024-06-10 12:05:16.121277] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.794 [2024-06-10 12:05:16.121295] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.794 [2024-06-10 12:05:16.134831] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.794 [2024-06-10 12:05:16.134850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.794 [2024-06-10 12:05:16.148484] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.794 [2024-06-10 12:05:16.148503] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.794 [2024-06-10 12:05:16.161841] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.794 [2024-06-10 12:05:16.161860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.794 [2024-06-10 12:05:16.174993] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.794 [2024-06-10 12:05:16.175013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.794 [2024-06-10 12:05:16.188771] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.794 [2024-06-10 12:05:16.188790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.794 [2024-06-10 12:05:16.202473] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.794 [2024-06-10 12:05:16.202497] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.794 [2024-06-10 12:05:16.215900] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.794 [2024-06-10 12:05:16.215919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.794 [2024-06-10 12:05:16.229765] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.794 [2024-06-10 12:05:16.229784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.794 [2024-06-10 12:05:16.243670] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.794 [2024-06-10 12:05:16.243689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.794 [2024-06-10 12:05:16.256543] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.794 [2024-06-10 12:05:16.256562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.794 [2024-06-10 12:05:16.269746] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.794 [2024-06-10 12:05:16.269766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.794 [2024-06-10 12:05:16.282888] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.794 [2024-06-10 12:05:16.282907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.794 [2024-06-10 12:05:16.296565] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:16:26.794 [2024-06-10 12:05:16.296584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.794 [2024-06-10 12:05:16.310177] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.794 [2024-06-10 12:05:16.310195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.054 [2024-06-10 12:05:16.323939] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.054 [2024-06-10 12:05:16.323958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.054 [2024-06-10 12:05:16.337276] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.054 [2024-06-10 12:05:16.337295] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.054 [2024-06-10 12:05:16.350562] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.054 [2024-06-10 12:05:16.350581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.054 [2024-06-10 12:05:16.364156] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.054 [2024-06-10 12:05:16.364175] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.054 [2024-06-10 12:05:16.377819] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.054 [2024-06-10 12:05:16.377838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.054 [2024-06-10 12:05:16.391151] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.054 [2024-06-10 12:05:16.391170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.054 [2024-06-10 12:05:16.404723] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.054 
[2024-06-10 12:05:16.404742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.054 [2024-06-10 12:05:16.417769] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.054 [2024-06-10 12:05:16.417788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.054 [2024-06-10 12:05:16.431174] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.054 [2024-06-10 12:05:16.431193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.054 [2024-06-10 12:05:16.444352] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.054 [2024-06-10 12:05:16.444371] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.054 [2024-06-10 12:05:16.457978] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.054 [2024-06-10 12:05:16.457998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.054 [2024-06-10 12:05:16.471356] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.054 [2024-06-10 12:05:16.471376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.054 [2024-06-10 12:05:16.484973] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.054 [2024-06-10 12:05:16.484992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.054 [2024-06-10 12:05:16.498452] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.054 [2024-06-10 12:05:16.498472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.054 [2024-06-10 12:05:16.512003] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.054 [2024-06-10 12:05:16.512023] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.054 [2024-06-10 12:05:16.525294] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.054 [2024-06-10 12:05:16.525313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.054 [2024-06-10 12:05:16.538839] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.054 [2024-06-10 12:05:16.538860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.054 [2024-06-10 12:05:16.552835] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.054 [2024-06-10 12:05:16.552854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.054 [2024-06-10 12:05:16.566320] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.054 [2024-06-10 12:05:16.566338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.313 [2024-06-10 12:05:16.579755] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.313 [2024-06-10 12:05:16.579774] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-06-10 12:05:16.593458] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-06-10 12:05:16.593488] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-06-10 12:05:16.607348] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-06-10 12:05:16.607368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-06-10 12:05:16.620774] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-06-10 12:05:16.620793] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:16:27.314 [2024-06-10 12:05:16.634181] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-06-10 12:05:16.634200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-06-10 12:05:16.647759] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-06-10 12:05:16.647778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-06-10 12:05:16.661653] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-06-10 12:05:16.661672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-06-10 12:05:16.672585] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-06-10 12:05:16.672604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-06-10 12:05:16.686697] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-06-10 12:05:16.686715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-06-10 12:05:16.699967] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-06-10 12:05:16.699986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-06-10 12:05:16.713173] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-06-10 12:05:16.713192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-06-10 12:05:16.727093] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-06-10 12:05:16.727112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-06-10 12:05:16.740446] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-06-10 12:05:16.740466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-06-10 12:05:16.754089] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-06-10 12:05:16.754109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-06-10 12:05:16.767325] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-06-10 12:05:16.767345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-06-10 12:05:16.780631] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-06-10 12:05:16.780650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-06-10 12:05:16.793871] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-06-10 12:05:16.793890] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-06-10 12:05:16.807258] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-06-10 12:05:16.807294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.314 [2024-06-10 12:05:16.820862] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.314 [2024-06-10 12:05:16.820881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.573 [2024-06-10 12:05:16.834078] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.573 [2024-06-10 12:05:16.834097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.573 [2024-06-10 12:05:16.847713] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:16:27.573 [2024-06-10 12:05:16.847736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.573 [2024-06-10 12:05:16.860862] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.573 [2024-06-10 12:05:16.860881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.573 [2024-06-10 12:05:16.874467] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.573 [2024-06-10 12:05:16.874496] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.573 [2024-06-10 12:05:16.888041] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.573 [2024-06-10 12:05:16.888061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.573 [2024-06-10 12:05:16.901733] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.573 [2024-06-10 12:05:16.901754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.573 [2024-06-10 12:05:16.914790] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.573 [2024-06-10 12:05:16.914810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.573 [2024-06-10 12:05:16.928027] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.573 [2024-06-10 12:05:16.928047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.573 [2024-06-10 12:05:16.941404] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.573 [2024-06-10 12:05:16.941423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.573 [2024-06-10 12:05:16.954867] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.573 
[2024-06-10 12:05:16.954886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.573 [2024-06-10 12:05:16.968146] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.573 [2024-06-10 12:05:16.968166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.573 [2024-06-10 12:05:16.981431] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.573 [2024-06-10 12:05:16.981450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.573 [2024-06-10 12:05:16.994321] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.574 [2024-06-10 12:05:16.994341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.574
00:16:27.574 Latency(us)
00:16:27.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:27.574 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:16:27.574 Nvme1n1 : 5.01 17328.46 135.38 0.00 0.00 7379.60 3316.12 17825.79
00:16:27.574 ===================================================================================================================
00:16:27.574 Total : 17328.46 135.38 0.00 0.00 7379.60 3316.12 17825.79
00:16:27.574 [2024-06-10 12:05:17.003673] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.574 [2024-06-10 12:05:17.003692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.574 [2024-06-10 12:05:17.015703] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.574 [2024-06-10 12:05:17.015718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.574 [2024-06-10 12:05:17.027744] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.574 [2024-06-10 12:05:17.027762]
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.574 [2024-06-10 12:05:17.039771] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.574 [2024-06-10 12:05:17.039787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.574 [2024-06-10 12:05:17.051804] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.574 [2024-06-10 12:05:17.051823] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.574 [2024-06-10 12:05:17.063835] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.574 [2024-06-10 12:05:17.063850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.574 [2024-06-10 12:05:17.075866] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.574 [2024-06-10 12:05:17.075882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.574 [2024-06-10 12:05:17.087901] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.574 [2024-06-10 12:05:17.087918] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.833 [2024-06-10 12:05:17.099942] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.833 [2024-06-10 12:05:17.099958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.833 [2024-06-10 12:05:17.111961] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.833 [2024-06-10 12:05:17.111973] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.833 [2024-06-10 12:05:17.123995] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.833 [2024-06-10 12:05:17.124008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:16:27.833 [2024-06-10 12:05:17.136025] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.833 [2024-06-10 12:05:17.136037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.833 [2024-06-10 12:05:17.148057] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.833 [2024-06-10 12:05:17.148069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.833 [2024-06-10 12:05:17.160092] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.833 [2024-06-10 12:05:17.160106] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.833 [2024-06-10 12:05:17.172124] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.833 [2024-06-10 12:05:17.172135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2191465) - No such process 00:16:27.833 12:05:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2191465 00:16:27.833 12:05:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:27.833 12:05:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:27.833 12:05:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:27.833 12:05:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:27.833 12:05:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:27.833 12:05:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:27.833 12:05:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:27.833 delay0 00:16:27.833 12:05:17 
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:27.833 12:05:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:27.833 12:05:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:27.833 12:05:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:27.833 12:05:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:27.833 12:05:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:27.833 EAL: No free 2048 kB hugepages reported on node 1 00:16:27.833 [2024-06-10 12:05:17.307414] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:34.398 Initializing NVMe Controllers 00:16:34.398 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:34.398 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:34.398 Initialization complete. Launching workers. 
00:16:34.398 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 110
00:16:34.398 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 385, failed to submit 45
00:16:34.398 success 208, unsuccess 177, failed 0
00:16:34.398 12:05:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:34.398 12:05:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:16:34.398 12:05:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:34.398 12:05:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:16:34.398 12:05:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:34.398 12:05:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:16:34.398 12:05:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:34.398 12:05:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:34.398 rmmod nvme_tcp 00:16:34.398 rmmod nvme_fabrics 00:16:34.398 rmmod nvme_keyring 00:16:34.398 12:05:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:34.398 12:05:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:16:34.398 12:05:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:16:34.398 12:05:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2189553 ']' 00:16:34.398 12:05:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2189553 00:16:34.398 12:05:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@949 -- # '[' -z 2189553 ']' 00:16:34.398 12:05:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # kill -0 2189553 00:16:34.398 12:05:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # uname 00:16:34.398 12:05:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:34.398 12:05:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2189553 00:16:34.398 12:05:23
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:16:34.398 12:05:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:16:34.398 12:05:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2189553' 00:16:34.398 killing process with pid 2189553 00:16:34.398 12:05:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@968 -- # kill 2189553 00:16:34.398 12:05:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@973 -- # wait 2189553 00:16:34.398 12:05:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:34.398 12:05:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:34.398 12:05:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:34.398 12:05:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:34.398 12:05:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:34.398 12:05:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.398 12:05:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:34.398 12:05:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.932 12:05:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:36.932 00:16:36.932 real 0m33.006s 00:16:36.932 user 0m42.329s 00:16:36.932 sys 0m13.338s 00:16:36.932 12:05:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:36.932 12:05:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:36.932 ************************************ 00:16:36.932 END TEST nvmf_zcopy 00:16:36.932 ************************************ 00:16:36.932 12:05:25 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:36.932 
12:05:25 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:36.932 12:05:25 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:36.932 12:05:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:36.932 ************************************ 00:16:36.932 START TEST nvmf_nmic 00:16:36.932 ************************************ 00:16:36.932 12:05:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:36.932 * Looking for test storage... 00:16:36.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:36.932 12:05:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:36.932 12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:16:36.932 12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:36.932 12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:36.932 12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:36.932 12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:36.932 12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:36.932 12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:36.932 12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:36.932 12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:36.932 12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:36.932 12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:36.932 12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:36.932 12:05:26 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:36.932 12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:36.932 12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:36.932 12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:36.932 12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:36.932 12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:36.932 12:05:26 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:36.932 12:05:26 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:36.932 12:05:26 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:36.932 12:05:26 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.932 12:05:26 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.932 12:05:26 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.932 12:05:26 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:16:36.932 12:05:26 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.932 12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:16:36.932 12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:36.932 
12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:36.932 12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:36.932 12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:36.932 12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:36.933 12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:36.933 12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:36.933 12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:36.933 12:05:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:36.933 12:05:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:36.933 12:05:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:16:36.933 12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:36.933 12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:36.933 12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:36.933 12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:36.933 12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:36.933 12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.933 12:05:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:36.933 12:05:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.933 12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:36.933 12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:36.933 12:05:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:16:36.933 12:05:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:43.499 12:05:32 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:43.499 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:16:43.499 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:43.499 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:43.499 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:43.499 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:43.499 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:43.499 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:16:43.499 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:43.499 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:16:43.499 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:16:43.499 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:16:43.499 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:16:43.499 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:16:43.499 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:16:43.499 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:43.499 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:43.499 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:43.499 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:43.499 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:43.499 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:43.499 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:43.499 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:43.499 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:43.499 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:43.499 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:43.500 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:43.500 Found 0000:af:00.1 (0x8086 - 0x159b) 
00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:43.500 Found net devices under 0000:af:00.0: cvl_0_0 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:43.500 12:05:32 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:43.500 Found net devices under 0000:af:00.1: cvl_0_1 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:43.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:43.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:16:43.500 00:16:43.500 --- 10.0.0.2 ping statistics --- 00:16:43.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.500 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:43.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:43.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:16:43.500 00:16:43.500 --- 10.0.0.1 ping statistics --- 00:16:43.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.500 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2197023 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2197023 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@830 -- # '[' -z 2197023 ']' 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- 
common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:43.500 12:05:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:43.500 [2024-06-10 12:05:32.505954] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:16:43.500 [2024-06-10 12:05:32.506003] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.500 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.500 [2024-06-10 12:05:32.580068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:43.500 [2024-06-10 12:05:32.655994] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:43.500 [2024-06-10 12:05:32.656034] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:43.500 [2024-06-10 12:05:32.656043] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:43.500 [2024-06-10 12:05:32.656052] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:43.500 [2024-06-10 12:05:32.656058] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:43.500 [2024-06-10 12:05:32.656104] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.500 [2024-06-10 12:05:32.656121] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:16:43.500 [2024-06-10 12:05:32.656214] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:16:43.500 [2024-06-10 12:05:32.656215] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.068 12:05:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:44.068 12:05:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@863 -- # return 0 00:16:44.068 12:05:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:44.068 12:05:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:44.068 12:05:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:44.068 12:05:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:44.068 12:05:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:44.068 12:05:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:44.068 12:05:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:44.068 [2024-06-10 12:05:33.357362] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:44.068 12:05:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:44.068 12:05:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:44.068 12:05:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:44.068 12:05:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:44.068 Malloc0 00:16:44.068 12:05:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:44.068 12:05:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:44.068 12:05:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:44.068 12:05:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:44.068 12:05:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:44.068 12:05:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:44.068 12:05:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:44.069 12:05:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:44.069 12:05:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:44.069 12:05:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:44.069 12:05:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:44.069 12:05:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:44.069 [2024-06-10 12:05:33.411911] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:44.069 12:05:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:44.069 12:05:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:44.069 test case1: single bdev can't be used in multiple subsystems 00:16:44.069 12:05:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:44.069 12:05:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:44.069 12:05:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:44.069 12:05:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:44.069 12:05:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:44.069 12:05:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:44.069 12:05:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:44.069 12:05:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:44.069 12:05:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:16:44.069 12:05:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:44.069 12:05:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:44.069 12:05:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:44.069 [2024-06-10 12:05:33.435804] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:44.069 [2024-06-10 12:05:33.435825] subsystem.c:2066:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:44.069 [2024-06-10 12:05:33.435834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.069 request: 00:16:44.069 { 00:16:44.069 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:44.069 "namespace": { 00:16:44.069 "bdev_name": "Malloc0", 00:16:44.069 "no_auto_visible": false 00:16:44.069 }, 00:16:44.069 "method": "nvmf_subsystem_add_ns", 00:16:44.069 "req_id": 1 00:16:44.069 } 00:16:44.069 Got JSON-RPC error response 00:16:44.069 response: 00:16:44.069 { 00:16:44.069 "code": -32602, 00:16:44.069 "message": "Invalid parameters" 00:16:44.069 } 00:16:44.069 12:05:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:16:44.069 12:05:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:16:44.069 12:05:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:44.069 12:05:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding 
namespace failed - expected result.' 00:16:44.069 Adding namespace failed - expected result. 00:16:44.069 12:05:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:44.069 test case2: host connect to nvmf target in multiple paths 00:16:44.069 12:05:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:44.069 12:05:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:44.069 12:05:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:44.069 [2024-06-10 12:05:33.451969] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:44.069 12:05:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:44.069 12:05:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:45.518 12:05:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:46.896 12:05:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:46.896 12:05:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1197 -- # local i=0 00:16:46.896 12:05:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:16:46.896 12:05:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:16:46.896 12:05:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # sleep 2 00:16:48.799 12:05:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:16:48.799 12:05:38 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:48.799 12:05:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:16:48.799 12:05:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:16:48.799 12:05:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:16:48.799 12:05:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # return 0 00:16:48.799 12:05:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:48.799 [global] 00:16:48.799 thread=1 00:16:48.799 invalidate=1 00:16:48.799 rw=write 00:16:48.799 time_based=1 00:16:48.799 runtime=1 00:16:48.799 ioengine=libaio 00:16:48.799 direct=1 00:16:48.799 bs=4096 00:16:48.799 iodepth=1 00:16:48.799 norandommap=0 00:16:48.799 numjobs=1 00:16:48.799 00:16:48.799 verify_dump=1 00:16:48.799 verify_backlog=512 00:16:48.799 verify_state_save=0 00:16:48.799 do_verify=1 00:16:48.799 verify=crc32c-intel 00:16:48.799 [job0] 00:16:48.799 filename=/dev/nvme0n1 00:16:48.799 Could not set queue depth (nvme0n1) 00:16:49.058 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:49.058 fio-3.35 00:16:49.058 Starting 1 thread 00:16:50.436 00:16:50.436 job0: (groupid=0, jobs=1): err= 0: pid=2198263: Mon Jun 10 12:05:39 2024 00:16:50.436 read: IOPS=129, BW=517KiB/s (530kB/s)(524KiB/1013msec) 00:16:50.436 slat (nsec): min=8585, max=27299, avg=11677.46, stdev=5967.28 00:16:50.436 clat (usec): min=223, max=41931, avg=6791.10, stdev=15024.00 00:16:50.436 lat (usec): min=232, max=41956, avg=6802.78, stdev=15029.57 00:16:50.436 clat percentiles (usec): 00:16:50.436 | 1.00th=[ 225], 5.00th=[ 227], 10.00th=[ 229], 20.00th=[ 229], 00:16:50.436 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 253], 60.00th=[ 273], 00:16:50.436 | 
70.00th=[ 273], 80.00th=[ 306], 90.00th=[41157], 95.00th=[41157], 00:16:50.436 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:16:50.436 | 99.99th=[41681] 00:16:50.436 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:16:50.436 slat (usec): min=11, max=22031, avg=55.68, stdev=973.11 00:16:50.436 clat (usec): min=133, max=380, avg=172.89, stdev=13.74 00:16:50.436 lat (usec): min=145, max=22338, avg=228.57, stdev=979.14 00:16:50.436 clat percentiles (usec): 00:16:50.436 | 1.00th=[ 145], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 169], 00:16:50.436 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 172], 60.00th=[ 174], 00:16:50.436 | 70.00th=[ 174], 80.00th=[ 176], 90.00th=[ 180], 95.00th=[ 184], 00:16:50.436 | 99.00th=[ 202], 99.50th=[ 260], 99.90th=[ 379], 99.95th=[ 379], 00:16:50.436 | 99.99th=[ 379] 00:16:50.436 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:50.436 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:50.436 lat (usec) : 250=89.27%, 500=7.47% 00:16:50.436 lat (msec) : 50=3.27% 00:16:50.436 cpu : usr=0.40%, sys=0.79%, ctx=645, majf=0, minf=2 00:16:50.436 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:50.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.436 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.436 issued rwts: total=131,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.436 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:50.436 00:16:50.436 Run status group 0 (all jobs): 00:16:50.436 READ: bw=517KiB/s (530kB/s), 517KiB/s-517KiB/s (530kB/s-530kB/s), io=524KiB (537kB), run=1013-1013msec 00:16:50.436 WRITE: bw=2022KiB/s (2070kB/s), 2022KiB/s-2022KiB/s (2070kB/s-2070kB/s), io=2048KiB (2097kB), run=1013-1013msec 00:16:50.436 00:16:50.436 Disk stats (read/write): 00:16:50.436 nvme0n1: ios=154/512, merge=0/0, ticks=1753/86, 
in_queue=1839, util=98.50% 00:16:50.436 12:05:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:50.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:50.436 12:05:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:50.436 12:05:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1218 -- # local i=0 00:16:50.436 12:05:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:16:50.436 12:05:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:50.436 12:05:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:16:50.436 12:05:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:50.695 12:05:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1230 -- # return 0 00:16:50.695 12:05:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:50.695 12:05:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:16:50.695 12:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:50.695 12:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:16:50.695 12:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:50.695 12:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:16:50.695 12:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:50.695 12:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:50.695 rmmod nvme_tcp 00:16:50.695 rmmod nvme_fabrics 00:16:50.695 rmmod nvme_keyring 00:16:50.695 12:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:50.695 12:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:16:50.695 12:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:16:50.695 12:05:40 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@489 -- # '[' -n 2197023 ']' 00:16:50.695 12:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2197023 00:16:50.695 12:05:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@949 -- # '[' -z 2197023 ']' 00:16:50.695 12:05:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # kill -0 2197023 00:16:50.695 12:05:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # uname 00:16:50.695 12:05:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:50.695 12:05:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2197023 00:16:50.695 12:05:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:50.695 12:05:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:50.695 12:05:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2197023' 00:16:50.695 killing process with pid 2197023 00:16:50.695 12:05:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@968 -- # kill 2197023 00:16:50.695 12:05:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@973 -- # wait 2197023 00:16:50.954 12:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:50.954 12:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:50.954 12:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:50.954 12:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:50.954 12:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:50.954 12:05:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.954 12:05:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:50.954 12:05:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:52.856 12:05:42 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:52.856 00:16:52.856 real 0m16.384s 00:16:52.856 user 0m39.561s 00:16:52.856 sys 0m5.925s 00:16:52.856 12:05:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:52.856 12:05:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:52.856 ************************************ 00:16:52.856 END TEST nvmf_nmic 00:16:52.856 ************************************ 00:16:53.115 12:05:42 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:53.115 12:05:42 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:53.115 12:05:42 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:53.115 12:05:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:53.115 ************************************ 00:16:53.115 START TEST nvmf_fio_target 00:16:53.115 ************************************ 00:16:53.115 12:05:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:53.115 * Looking for test storage... 
00:16:53.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:53.115 12:05:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:53.115 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:16:53.115 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:53.115 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:53.115 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:53.115 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:53.115 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:53.115 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:53.115 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:53.115 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:53.115 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:53.115 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:53.115 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:53.115 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:53.115 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:53.115 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:53.115 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:53.115 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:53.115 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:53.115 12:05:42 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:53.115 12:05:42 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:53.115 12:05:42 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:53.115 12:05:42 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.116 12:05:42 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.116 12:05:42 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.116 12:05:42 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:16:53.116 12:05:42 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.116 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:16:53.116 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:53.116 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:53.116 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:53.116 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:53.116 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:53.116 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:53.116 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:16:53.116 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:53.116 12:05:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:53.116 12:05:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:53.116 12:05:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:53.116 12:05:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:16:53.116 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:53.116 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:53.116 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:53.116 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:53.116 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:53.116 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.116 12:05:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:53.116 12:05:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.116 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:53.116 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:53.116 12:05:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:53.116 12:05:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:01.235 
12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:01.235 
12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:01.235 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:01.235 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 
-- # [[ ice == unknown ]] 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:01.235 Found net devices under 0000:af:00.0: cvl_0_0 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:01.235 Found net devices under 0000:af:00.1: cvl_0_1 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:01.235 12:05:49 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:01.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:01.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:17:01.235 00:17:01.235 --- 10.0.0.2 ping statistics --- 00:17:01.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.235 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:01.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:01.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:17:01.235 00:17:01.235 --- 10.0.0.1 ping statistics --- 00:17:01.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.235 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2202231 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2202231 00:17:01.235 12:05:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@830 
-- # '[' -z 2202231 ']' 00:17:01.236 12:05:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.236 12:05:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:01.236 12:05:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.236 12:05:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:01.236 12:05:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.236 [2024-06-10 12:05:49.623845] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:17:01.236 [2024-06-10 12:05:49.623889] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.236 EAL: No free 2048 kB hugepages reported on node 1 00:17:01.236 [2024-06-10 12:05:49.695716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:01.236 [2024-06-10 12:05:49.769108] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:01.236 [2024-06-10 12:05:49.769148] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:01.236 [2024-06-10 12:05:49.769157] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:01.236 [2024-06-10 12:05:49.769166] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:01.236 [2024-06-10 12:05:49.769173] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:01.236 [2024-06-10 12:05:49.769221] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.236 [2024-06-10 12:05:49.769300] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:17:01.236 [2024-06-10 12:05:49.769373] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:17:01.236 [2024-06-10 12:05:49.769375] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.236 12:05:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:01.236 12:05:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@863 -- # return 0 00:17:01.236 12:05:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:01.236 12:05:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:01.236 12:05:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.236 12:05:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:01.236 12:05:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:01.236 [2024-06-10 12:05:50.631723] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:01.236 12:05:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:01.494 12:05:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:17:01.494 12:05:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:01.753 12:05:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:17:01.753 12:05:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:17:01.753 12:05:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:17:01.753 12:05:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:02.012 12:05:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:17:02.012 12:05:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:17:02.271 12:05:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:02.529 12:05:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:17:02.529 12:05:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:02.529 12:05:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:17:02.529 12:05:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:02.788 12:05:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:17:02.788 12:05:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:17:03.047 12:05:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:03.047 12:05:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:03.047 12:05:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:03.305 12:05:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:03.305 12:05:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:03.565 12:05:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:03.565 [2024-06-10 12:05:53.072041] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:03.824 12:05:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:17:03.824 12:05:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:04.083 12:05:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:05.460 12:05:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:17:05.460 12:05:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # local i=0 00:17:05.460 12:05:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:17:05.460 12:05:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # [[ -n 4 ]] 00:17:05.460 12:05:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # nvme_device_counter=4 00:17:05.460 12:05:54 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # sleep 2 00:17:07.364 12:05:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:17:07.364 12:05:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:17:07.364 12:05:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:17:07.364 12:05:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # nvme_devices=4 00:17:07.364 12:05:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:17:07.364 12:05:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # return 0 00:17:07.364 12:05:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:07.364 [global] 00:17:07.364 thread=1 00:17:07.364 invalidate=1 00:17:07.364 rw=write 00:17:07.364 time_based=1 00:17:07.364 runtime=1 00:17:07.364 ioengine=libaio 00:17:07.364 direct=1 00:17:07.364 bs=4096 00:17:07.364 iodepth=1 00:17:07.364 norandommap=0 00:17:07.364 numjobs=1 00:17:07.364 00:17:07.364 verify_dump=1 00:17:07.364 verify_backlog=512 00:17:07.364 verify_state_save=0 00:17:07.364 do_verify=1 00:17:07.364 verify=crc32c-intel 00:17:07.364 [job0] 00:17:07.364 filename=/dev/nvme0n1 00:17:07.364 [job1] 00:17:07.364 filename=/dev/nvme0n2 00:17:07.364 [job2] 00:17:07.364 filename=/dev/nvme0n3 00:17:07.364 [job3] 00:17:07.364 filename=/dev/nvme0n4 00:17:07.646 Could not set queue depth (nvme0n1) 00:17:07.646 Could not set queue depth (nvme0n2) 00:17:07.646 Could not set queue depth (nvme0n3) 00:17:07.646 Could not set queue depth (nvme0n4) 00:17:07.911 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:07.911 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:17:07.911 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:07.911 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:07.911 fio-3.35 00:17:07.911 Starting 4 threads 00:17:09.311 00:17:09.311 job0: (groupid=0, jobs=1): err= 0: pid=2203765: Mon Jun 10 12:05:58 2024 00:17:09.311 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:17:09.311 slat (nsec): min=8646, max=41945, avg=9496.62, stdev=1407.50 00:17:09.311 clat (usec): min=189, max=554, avg=267.55, stdev=20.75 00:17:09.311 lat (usec): min=198, max=563, avg=277.04, stdev=20.84 00:17:09.311 clat percentiles (usec): 00:17:09.311 | 1.00th=[ 208], 5.00th=[ 231], 10.00th=[ 243], 20.00th=[ 255], 00:17:09.311 | 30.00th=[ 262], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:17:09.311 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 293], 00:17:09.311 | 99.00th=[ 314], 99.50th=[ 326], 99.90th=[ 343], 99.95th=[ 375], 00:17:09.311 | 99.99th=[ 553] 00:17:09.311 write: IOPS=2227, BW=8911KiB/s (9125kB/s)(8920KiB/1001msec); 0 zone resets 00:17:09.311 slat (nsec): min=8786, max=60555, avg=13389.42, stdev=2062.30 00:17:09.311 clat (usec): min=131, max=357, avg=175.01, stdev=26.22 00:17:09.311 lat (usec): min=144, max=370, avg=188.40, stdev=26.25 00:17:09.311 clat percentiles (usec): 00:17:09.311 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:17:09.311 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:17:09.311 | 70.00th=[ 178], 80.00th=[ 186], 90.00th=[ 204], 95.00th=[ 231], 00:17:09.311 | 99.00th=[ 273], 99.50th=[ 318], 99.90th=[ 347], 99.95th=[ 351], 00:17:09.311 | 99.99th=[ 359] 00:17:09.311 bw ( KiB/s): min= 8224, max= 8224, per=37.48%, avg=8224.00, stdev= 0.00, samples=1 00:17:09.311 iops : min= 2056, max= 2056, avg=2056.00, stdev= 0.00, samples=1 00:17:09.311 lat (usec) : 250=58.04%, 500=41.94%, 750=0.02% 00:17:09.311 cpu : 
usr=4.00%, sys=7.40%, ctx=4281, majf=0, minf=1 00:17:09.311 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:09.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.311 issued rwts: total=2048,2230,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:09.311 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:09.311 job1: (groupid=0, jobs=1): err= 0: pid=2203768: Mon Jun 10 12:05:58 2024 00:17:09.311 read: IOPS=21, BW=87.3KiB/s (89.4kB/s)(88.0KiB/1008msec) 00:17:09.311 slat (nsec): min=12334, max=25311, avg=24290.41, stdev=2682.92 00:17:09.311 clat (usec): min=40837, max=41982, avg=41143.19, stdev=378.32 00:17:09.311 lat (usec): min=40862, max=42007, avg=41167.48, stdev=377.39 00:17:09.311 clat percentiles (usec): 00:17:09.311 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:17:09.311 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:09.311 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:17:09.311 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:09.311 | 99.99th=[42206] 00:17:09.311 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:17:09.311 slat (usec): min=11, max=186, avg=12.89, stdev= 8.25 00:17:09.311 clat (usec): min=143, max=343, avg=184.70, stdev=32.96 00:17:09.311 lat (usec): min=155, max=529, avg=197.59, stdev=35.87 00:17:09.311 clat percentiles (usec): 00:17:09.311 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:17:09.311 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 178], 00:17:09.311 | 70.00th=[ 186], 80.00th=[ 217], 90.00th=[ 237], 95.00th=[ 247], 00:17:09.311 | 99.00th=[ 277], 99.50th=[ 326], 99.90th=[ 343], 99.95th=[ 343], 00:17:09.311 | 99.99th=[ 343] 00:17:09.311 bw ( KiB/s): min= 4096, max= 4096, per=18.67%, avg=4096.00, stdev= 0.00, samples=1 
00:17:09.311 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:09.311 lat (usec) : 250=91.57%, 500=4.31% 00:17:09.311 lat (msec) : 50=4.12% 00:17:09.311 cpu : usr=0.00%, sys=0.99%, ctx=535, majf=0, minf=2 00:17:09.311 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:09.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.311 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:09.311 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:09.311 job2: (groupid=0, jobs=1): err= 0: pid=2203769: Mon Jun 10 12:05:58 2024 00:17:09.311 read: IOPS=21, BW=86.6KiB/s (88.7kB/s)(88.0KiB/1016msec) 00:17:09.311 slat (nsec): min=11812, max=29102, avg=22936.41, stdev=4198.54 00:17:09.311 clat (usec): min=40870, max=41428, avg=40986.26, stdev=112.65 00:17:09.311 lat (usec): min=40894, max=41440, avg=41009.20, stdev=109.57 00:17:09.311 clat percentiles (usec): 00:17:09.311 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:17:09.311 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:09.311 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:17:09.311 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:17:09.311 | 99.99th=[41681] 00:17:09.311 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:17:09.311 slat (nsec): min=12791, max=44665, avg=14172.43, stdev=2208.30 00:17:09.311 clat (usec): min=143, max=674, avg=204.38, stdev=40.24 00:17:09.311 lat (usec): min=157, max=688, avg=218.55, stdev=40.70 00:17:09.311 clat percentiles (usec): 00:17:09.311 | 1.00th=[ 153], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 182], 00:17:09.311 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 202], 00:17:09.311 | 70.00th=[ 215], 80.00th=[ 227], 90.00th=[ 241], 95.00th=[ 253], 00:17:09.311 | 
99.00th=[ 302], 99.50th=[ 457], 99.90th=[ 676], 99.95th=[ 676], 00:17:09.311 | 99.99th=[ 676] 00:17:09.311 bw ( KiB/s): min= 4096, max= 4096, per=18.67%, avg=4096.00, stdev= 0.00, samples=1 00:17:09.311 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:09.311 lat (usec) : 250=90.64%, 500=4.87%, 750=0.37% 00:17:09.311 lat (msec) : 50=4.12% 00:17:09.311 cpu : usr=0.49%, sys=0.99%, ctx=535, majf=0, minf=1 00:17:09.311 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:09.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.311 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:09.311 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:09.311 job3: (groupid=0, jobs=1): err= 0: pid=2203770: Mon Jun 10 12:05:58 2024 00:17:09.311 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:17:09.311 slat (nsec): min=8765, max=51632, avg=9733.25, stdev=1888.84 00:17:09.311 clat (usec): min=181, max=756, avg=266.28, stdev=50.26 00:17:09.311 lat (usec): min=190, max=766, avg=276.01, stdev=50.35 00:17:09.311 clat percentiles (usec): 00:17:09.311 | 1.00th=[ 210], 5.00th=[ 221], 10.00th=[ 227], 20.00th=[ 235], 00:17:09.311 | 30.00th=[ 241], 40.00th=[ 249], 50.00th=[ 258], 60.00th=[ 269], 00:17:09.311 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 429], 00:17:09.311 | 99.00th=[ 457], 99.50th=[ 461], 99.90th=[ 474], 99.95th=[ 537], 00:17:09.311 | 99.99th=[ 758] 00:17:09.311 write: IOPS=2316, BW=9267KiB/s (9489kB/s)(9276KiB/1001msec); 0 zone resets 00:17:09.311 slat (nsec): min=11803, max=51177, avg=13251.14, stdev=2244.77 00:17:09.311 clat (usec): min=124, max=351, avg=168.82, stdev=14.21 00:17:09.311 lat (usec): min=139, max=364, avg=182.07, stdev=14.52 00:17:09.311 clat percentiles (usec): 00:17:09.311 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 159], 
00:17:09.311 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 172], 00:17:09.311 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 186], 95.00th=[ 192], 00:17:09.311 | 99.00th=[ 202], 99.50th=[ 206], 99.90th=[ 223], 99.95th=[ 322], 00:17:09.311 | 99.99th=[ 351] 00:17:09.311 bw ( KiB/s): min=10720, max=10720, per=48.86%, avg=10720.00, stdev= 0.00, samples=1 00:17:09.311 iops : min= 2680, max= 2680, avg=2680.00, stdev= 0.00, samples=1 00:17:09.311 lat (usec) : 250=72.59%, 500=27.36%, 750=0.02%, 1000=0.02% 00:17:09.311 cpu : usr=4.10%, sys=7.60%, ctx=4368, majf=0, minf=1 00:17:09.311 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:09.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.311 issued rwts: total=2048,2319,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:09.311 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:09.311 00:17:09.311 Run status group 0 (all jobs): 00:17:09.312 READ: bw=15.9MiB/s (16.7MB/s), 86.6KiB/s-8184KiB/s (88.7kB/s-8380kB/s), io=16.2MiB (17.0MB), run=1001-1016msec 00:17:09.312 WRITE: bw=21.4MiB/s (22.5MB/s), 2016KiB/s-9267KiB/s (2064kB/s-9489kB/s), io=21.8MiB (22.8MB), run=1001-1016msec 00:17:09.312 00:17:09.312 Disk stats (read/write): 00:17:09.312 nvme0n1: ios=1559/2005, merge=0/0, ticks=1260/339, in_queue=1599, util=82.85% 00:17:09.312 nvme0n2: ios=67/512, merge=0/0, ticks=767/89, in_queue=856, util=87.79% 00:17:09.312 nvme0n3: ios=39/512, merge=0/0, ticks=1565/98, in_queue=1663, util=90.91% 00:17:09.312 nvme0n4: ios=1676/2048, merge=0/0, ticks=488/330, in_queue=818, util=95.44% 00:17:09.312 12:05:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:09.312 [global] 00:17:09.312 thread=1 00:17:09.312 invalidate=1 00:17:09.312 rw=randwrite 00:17:09.312 
time_based=1 00:17:09.312 runtime=1 00:17:09.312 ioengine=libaio 00:17:09.312 direct=1 00:17:09.312 bs=4096 00:17:09.312 iodepth=1 00:17:09.312 norandommap=0 00:17:09.312 numjobs=1 00:17:09.312 00:17:09.312 verify_dump=1 00:17:09.312 verify_backlog=512 00:17:09.312 verify_state_save=0 00:17:09.312 do_verify=1 00:17:09.312 verify=crc32c-intel 00:17:09.312 [job0] 00:17:09.312 filename=/dev/nvme0n1 00:17:09.312 [job1] 00:17:09.312 filename=/dev/nvme0n2 00:17:09.312 [job2] 00:17:09.312 filename=/dev/nvme0n3 00:17:09.312 [job3] 00:17:09.312 filename=/dev/nvme0n4 00:17:09.312 Could not set queue depth (nvme0n1) 00:17:09.312 Could not set queue depth (nvme0n2) 00:17:09.312 Could not set queue depth (nvme0n3) 00:17:09.312 Could not set queue depth (nvme0n4) 00:17:09.569 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:09.569 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:09.569 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:09.569 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:09.569 fio-3.35 00:17:09.569 Starting 4 threads 00:17:10.966 00:17:10.966 job0: (groupid=0, jobs=1): err= 0: pid=2204195: Mon Jun 10 12:06:00 2024 00:17:10.967 read: IOPS=22, BW=90.3KiB/s (92.5kB/s)(92.0KiB/1019msec) 00:17:10.967 slat (nsec): min=6286, max=23895, avg=14282.52, stdev=6045.56 00:17:10.967 clat (usec): min=11345, max=41955, avg=39742.58, stdev=6194.57 00:17:10.967 lat (usec): min=11369, max=41965, avg=39756.86, stdev=6192.51 00:17:10.967 clat percentiles (usec): 00:17:10.967 | 1.00th=[11338], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:17:10.967 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:10.967 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:17:10.967 | 
99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:10.967 | 99.99th=[42206] 00:17:10.967 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:17:10.967 slat (nsec): min=5658, max=26479, avg=7152.95, stdev=1235.41 00:17:10.967 clat (usec): min=169, max=261, avg=194.76, stdev=10.65 00:17:10.967 lat (usec): min=177, max=287, avg=201.92, stdev=10.79 00:17:10.967 clat percentiles (usec): 00:17:10.967 | 1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 188], 00:17:10.967 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 194], 60.00th=[ 196], 00:17:10.967 | 70.00th=[ 200], 80.00th=[ 202], 90.00th=[ 208], 95.00th=[ 215], 00:17:10.967 | 99.00th=[ 223], 99.50th=[ 229], 99.90th=[ 262], 99.95th=[ 262], 00:17:10.967 | 99.99th=[ 262] 00:17:10.967 bw ( KiB/s): min= 4096, max= 4096, per=19.80%, avg=4096.00, stdev= 0.00, samples=1 00:17:10.967 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:10.967 lat (usec) : 250=95.51%, 500=0.19% 00:17:10.967 lat (msec) : 20=0.19%, 50=4.11% 00:17:10.967 cpu : usr=0.20%, sys=0.29%, ctx=537, majf=0, minf=1 00:17:10.967 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:10.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.967 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:10.967 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:10.967 job1: (groupid=0, jobs=1): err= 0: pid=2204196: Mon Jun 10 12:06:00 2024 00:17:10.967 read: IOPS=1801, BW=7205KiB/s (7378kB/s)(7212KiB/1001msec) 00:17:10.967 slat (nsec): min=9571, max=62263, avg=10490.42, stdev=2112.86 00:17:10.967 clat (usec): min=241, max=2126, avg=317.04, stdev=72.94 00:17:10.967 lat (usec): min=251, max=2136, avg=327.53, stdev=72.87 00:17:10.967 clat percentiles (usec): 00:17:10.967 | 1.00th=[ 253], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 277], 
00:17:10.967 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 297], 00:17:10.967 | 70.00th=[ 310], 80.00th=[ 351], 90.00th=[ 429], 95.00th=[ 437], 00:17:10.967 | 99.00th=[ 457], 99.50th=[ 465], 99.90th=[ 619], 99.95th=[ 2114], 00:17:10.967 | 99.99th=[ 2114] 00:17:10.967 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:17:10.967 slat (nsec): min=12735, max=39154, avg=14053.21, stdev=1751.81 00:17:10.967 clat (usec): min=136, max=414, avg=179.86, stdev=16.82 00:17:10.967 lat (usec): min=150, max=427, avg=193.91, stdev=17.06 00:17:10.967 clat percentiles (usec): 00:17:10.967 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 167], 00:17:10.967 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:17:10.967 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 206], 00:17:10.967 | 99.00th=[ 223], 99.50th=[ 233], 99.90th=[ 285], 99.95th=[ 330], 00:17:10.967 | 99.99th=[ 416] 00:17:10.967 bw ( KiB/s): min= 8192, max= 8192, per=39.60%, avg=8192.00, stdev= 0.00, samples=1 00:17:10.967 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:17:10.967 lat (usec) : 250=53.28%, 500=46.61%, 750=0.08% 00:17:10.967 lat (msec) : 4=0.03% 00:17:10.967 cpu : usr=4.20%, sys=6.80%, ctx=3852, majf=0, minf=1 00:17:10.967 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:10.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.967 issued rwts: total=1803,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:10.967 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:10.967 job2: (groupid=0, jobs=1): err= 0: pid=2204197: Mon Jun 10 12:06:00 2024 00:17:10.967 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:17:10.967 slat (nsec): min=8657, max=41775, avg=9564.29, stdev=1448.99 00:17:10.967 clat (usec): min=225, max=2509, avg=272.48, stdev=61.04 
00:17:10.967 lat (usec): min=234, max=2519, avg=282.05, stdev=61.08 00:17:10.967 clat percentiles (usec): 00:17:10.967 | 1.00th=[ 239], 5.00th=[ 247], 10.00th=[ 251], 20.00th=[ 258], 00:17:10.967 | 30.00th=[ 262], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:17:10.967 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 285], 95.00th=[ 293], 00:17:10.967 | 99.00th=[ 441], 99.50th=[ 453], 99.90th=[ 1020], 99.95th=[ 1139], 00:17:10.967 | 99.99th=[ 2507] 00:17:10.967 write: IOPS=2195, BW=8783KiB/s (8994kB/s)(8792KiB/1001msec); 0 zone resets 00:17:10.967 slat (nsec): min=11541, max=44653, avg=12613.59, stdev=1791.50 00:17:10.967 clat (usec): min=137, max=3257, avg=174.09, stdev=67.58 00:17:10.967 lat (usec): min=149, max=3269, avg=186.71, stdev=67.65 00:17:10.967 clat percentiles (usec): 00:17:10.967 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 161], 00:17:10.967 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:17:10.967 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 198], 00:17:10.967 | 99.00th=[ 215], 99.50th=[ 223], 99.90th=[ 277], 99.95th=[ 343], 00:17:10.967 | 99.99th=[ 3261] 00:17:10.967 bw ( KiB/s): min= 8952, max= 8952, per=43.27%, avg=8952.00, stdev= 0.00, samples=1 00:17:10.967 iops : min= 2238, max= 2238, avg=2238.00, stdev= 0.00, samples=1 00:17:10.967 lat (usec) : 250=55.51%, 500=44.35%, 750=0.05% 00:17:10.967 lat (msec) : 2=0.05%, 4=0.05% 00:17:10.967 cpu : usr=2.20%, sys=8.90%, ctx=4246, majf=0, minf=1 00:17:10.967 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:10.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.967 issued rwts: total=2048,2198,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:10.967 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:10.967 job3: (groupid=0, jobs=1): err= 0: pid=2204198: Mon Jun 10 12:06:00 2024 00:17:10.967 read: IOPS=21, 
BW=87.1KiB/s (89.2kB/s)(88.0KiB/1010msec) 00:17:10.967 slat (nsec): min=12307, max=25320, avg=24078.00, stdev=2684.16 00:17:10.967 clat (usec): min=40899, max=41215, avg=40980.84, stdev=72.96 00:17:10.967 lat (usec): min=40923, max=41227, avg=41004.91, stdev=71.16 00:17:10.967 clat percentiles (usec): 00:17:10.967 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:17:10.967 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:10.967 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:17:10.967 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:17:10.967 | 99.99th=[41157] 00:17:10.967 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:17:10.967 slat (nsec): min=12182, max=39921, avg=13205.89, stdev=1840.70 00:17:10.967 clat (usec): min=145, max=285, avg=193.60, stdev=14.43 00:17:10.967 lat (usec): min=157, max=325, avg=206.80, stdev=14.89 00:17:10.967 clat percentiles (usec): 00:17:10.967 | 1.00th=[ 165], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 184], 00:17:10.967 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 192], 60.00th=[ 196], 00:17:10.967 | 70.00th=[ 200], 80.00th=[ 204], 90.00th=[ 210], 95.00th=[ 217], 00:17:10.967 | 99.00th=[ 239], 99.50th=[ 251], 99.90th=[ 285], 99.95th=[ 285], 00:17:10.967 | 99.99th=[ 285] 00:17:10.967 bw ( KiB/s): min= 4096, max= 4096, per=19.80%, avg=4096.00, stdev= 0.00, samples=1 00:17:10.967 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:10.967 lat (usec) : 250=95.32%, 500=0.56% 00:17:10.967 lat (msec) : 50=4.12% 00:17:10.967 cpu : usr=0.10%, sys=1.39%, ctx=535, majf=0, minf=2 00:17:10.967 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:10.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.967 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:17:10.967 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:10.967 00:17:10.967 Run status group 0 (all jobs): 00:17:10.967 READ: bw=14.9MiB/s (15.7MB/s), 87.1KiB/s-8184KiB/s (89.2kB/s-8380kB/s), io=15.2MiB (16.0MB), run=1001-1019msec 00:17:10.967 WRITE: bw=20.2MiB/s (21.2MB/s), 2010KiB/s-8783KiB/s (2058kB/s-8994kB/s), io=20.6MiB (21.6MB), run=1001-1019msec 00:17:10.967 00:17:10.967 Disk stats (read/write): 00:17:10.967 nvme0n1: ios=42/512, merge=0/0, ticks=1644/93, in_queue=1737, util=93.49% 00:17:10.967 nvme0n2: ios=1527/1536, merge=0/0, ticks=1040/253, in_queue=1293, util=99.90% 00:17:10.967 nvme0n3: ios=1574/1964, merge=0/0, ticks=597/319, in_queue=916, util=94.79% 00:17:10.967 nvme0n4: ios=55/512, merge=0/0, ticks=1497/93, in_queue=1590, util=99.45% 00:17:10.967 12:06:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:10.967 [global] 00:17:10.967 thread=1 00:17:10.967 invalidate=1 00:17:10.967 rw=write 00:17:10.967 time_based=1 00:17:10.967 runtime=1 00:17:10.967 ioengine=libaio 00:17:10.967 direct=1 00:17:10.967 bs=4096 00:17:10.967 iodepth=128 00:17:10.967 norandommap=0 00:17:10.967 numjobs=1 00:17:10.967 00:17:10.967 verify_dump=1 00:17:10.967 verify_backlog=512 00:17:10.967 verify_state_save=0 00:17:10.967 do_verify=1 00:17:10.967 verify=crc32c-intel 00:17:10.967 [job0] 00:17:10.967 filename=/dev/nvme0n1 00:17:10.967 [job1] 00:17:10.967 filename=/dev/nvme0n2 00:17:10.967 [job2] 00:17:10.967 filename=/dev/nvme0n3 00:17:10.967 [job3] 00:17:10.967 filename=/dev/nvme0n4 00:17:10.967 Could not set queue depth (nvme0n1) 00:17:10.967 Could not set queue depth (nvme0n2) 00:17:10.967 Could not set queue depth (nvme0n3) 00:17:10.967 Could not set queue depth (nvme0n4) 00:17:11.230 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:11.230 job1: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:11.230 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:11.230 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:11.230 fio-3.35 00:17:11.230 Starting 4 threads 00:17:12.640 00:17:12.640 job0: (groupid=0, jobs=1): err= 0: pid=2204648: Mon Jun 10 12:06:01 2024 00:17:12.640 read: IOPS=5216, BW=20.4MiB/s (21.4MB/s)(20.4MiB/1002msec) 00:17:12.640 slat (usec): min=2, max=11586, avg=90.44, stdev=558.15 00:17:12.640 clat (usec): min=1599, max=37184, avg=11864.19, stdev=4823.63 00:17:12.640 lat (usec): min=1608, max=37666, avg=11954.63, stdev=4855.29 00:17:12.640 clat percentiles (usec): 00:17:12.640 | 1.00th=[ 6456], 5.00th=[ 7963], 10.00th=[ 8717], 20.00th=[ 9765], 00:17:12.640 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10421], 60.00th=[10814], 00:17:12.640 | 70.00th=[11469], 80.00th=[12518], 90.00th=[16581], 95.00th=[23725], 00:17:12.640 | 99.00th=[32113], 99.50th=[34866], 99.90th=[35914], 99.95th=[36963], 00:17:12.640 | 99.99th=[36963] 00:17:12.640 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:17:12.640 slat (usec): min=2, max=22748, avg=80.18, stdev=550.74 00:17:12.640 clat (usec): min=1403, max=74584, avg=11400.30, stdev=6262.50 00:17:12.640 lat (usec): min=1418, max=74594, avg=11480.48, stdev=6280.61 00:17:12.640 clat percentiles (usec): 00:17:12.640 | 1.00th=[ 6325], 5.00th=[ 7308], 10.00th=[ 7832], 20.00th=[ 9241], 00:17:12.641 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:17:12.641 | 70.00th=[10552], 80.00th=[10945], 90.00th=[13435], 95.00th=[19792], 00:17:12.641 | 99.00th=[45351], 99.50th=[45351], 99.90th=[68682], 99.95th=[68682], 00:17:12.641 | 99.99th=[74974] 00:17:12.641 bw ( KiB/s): min=21880, max=23016, per=31.57%, avg=22448.00, stdev=803.27, samples=2 00:17:12.641 iops : min= 5470, max= 5754, 
avg=5612.00, stdev=200.82, samples=2 00:17:12.641 lat (msec) : 2=0.08%, 4=0.09%, 10=34.21%, 20=60.01%, 50=5.39% 00:17:12.641 lat (msec) : 100=0.21% 00:17:12.641 cpu : usr=5.00%, sys=6.89%, ctx=606, majf=0, minf=1 00:17:12.641 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:17:12.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:12.641 issued rwts: total=5227,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.641 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:12.641 job1: (groupid=0, jobs=1): err= 0: pid=2204649: Mon Jun 10 12:06:01 2024 00:17:12.641 read: IOPS=3482, BW=13.6MiB/s (14.3MB/s)(13.7MiB/1008msec) 00:17:12.641 slat (nsec): min=1827, max=23790k, avg=152333.53, stdev=1141706.64 00:17:12.641 clat (usec): min=384, max=61691, avg=20939.29, stdev=11189.70 00:17:12.641 lat (usec): min=1066, max=61761, avg=21091.62, stdev=11258.04 00:17:12.641 clat percentiles (usec): 00:17:12.641 | 1.00th=[ 2671], 5.00th=[ 6980], 10.00th=[11076], 20.00th=[12518], 00:17:12.641 | 30.00th=[13304], 40.00th=[16450], 50.00th=[17433], 60.00th=[20841], 00:17:12.641 | 70.00th=[22676], 80.00th=[27919], 90.00th=[38011], 95.00th=[45351], 00:17:12.641 | 99.00th=[54789], 99.50th=[57410], 99.90th=[57410], 99.95th=[60031], 00:17:12.641 | 99.99th=[61604] 00:17:12.641 write: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec); 0 zone resets 00:17:12.641 slat (usec): min=2, max=15030, avg=94.67, stdev=789.74 00:17:12.641 clat (usec): min=359, max=108206, avg=15076.08, stdev=15361.31 00:17:12.641 lat (usec): min=374, max=108210, avg=15170.75, stdev=15382.08 00:17:12.641 clat percentiles (usec): 00:17:12.641 | 1.00th=[ 758], 5.00th=[ 1237], 10.00th=[ 2638], 20.00th=[ 5211], 00:17:12.641 | 30.00th=[ 10159], 40.00th=[ 11469], 50.00th=[ 13829], 60.00th=[ 14353], 00:17:12.641 | 70.00th=[ 15664], 80.00th=[ 18744], 90.00th=[ 22676], 95.00th=[ 
30278], 00:17:12.641 | 99.00th=[107480], 99.50th=[107480], 99.90th=[108528], 99.95th=[108528], 00:17:12.641 | 99.99th=[108528] 00:17:12.641 bw ( KiB/s): min=12712, max=15991, per=20.18%, avg=14351.50, stdev=2318.60, samples=2 00:17:12.641 iops : min= 3178, max= 3997, avg=3587.50, stdev=579.12, samples=2 00:17:12.641 lat (usec) : 500=0.08%, 750=0.34%, 1000=0.58% 00:17:12.641 lat (msec) : 2=3.82%, 4=4.02%, 10=10.06%, 20=52.16%, 50=26.09% 00:17:12.641 lat (msec) : 100=2.09%, 250=0.76% 00:17:12.641 cpu : usr=3.38%, sys=5.26%, ctx=252, majf=0, minf=1 00:17:12.641 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:17:12.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:12.641 issued rwts: total=3510,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.641 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:12.641 job2: (groupid=0, jobs=1): err= 0: pid=2204659: Mon Jun 10 12:06:01 2024 00:17:12.641 read: IOPS=3803, BW=14.9MiB/s (15.6MB/s)(15.0MiB/1007msec) 00:17:12.641 slat (usec): min=2, max=14072, avg=133.61, stdev=857.49 00:17:12.641 clat (usec): min=3299, max=42459, avg=15607.26, stdev=6109.47 00:17:12.641 lat (usec): min=4383, max=42464, avg=15740.87, stdev=6166.76 00:17:12.641 clat percentiles (usec): 00:17:12.641 | 1.00th=[ 7767], 5.00th=[ 9634], 10.00th=[10814], 20.00th=[11994], 00:17:12.641 | 30.00th=[12649], 40.00th=[13304], 50.00th=[13698], 60.00th=[14615], 00:17:12.641 | 70.00th=[15664], 80.00th=[17433], 90.00th=[22152], 95.00th=[30802], 00:17:12.641 | 99.00th=[39060], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:17:12.641 | 99.99th=[42206] 00:17:12.641 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:17:12.641 slat (usec): min=3, max=9186, avg=112.27, stdev=504.85 00:17:12.641 clat (usec): min=3022, max=42447, avg=16561.90, stdev=8253.46 00:17:12.641 lat (usec): min=3035, 
max=42452, avg=16674.17, stdev=8303.82 00:17:12.641 clat percentiles (usec): 00:17:12.641 | 1.00th=[ 4555], 5.00th=[ 7701], 10.00th=[ 8979], 20.00th=[11338], 00:17:12.641 | 30.00th=[11731], 40.00th=[11863], 50.00th=[12125], 60.00th=[14484], 00:17:12.641 | 70.00th=[18744], 80.00th=[24773], 90.00th=[31065], 95.00th=[33162], 00:17:12.641 | 99.00th=[38011], 99.50th=[38011], 99.90th=[41157], 99.95th=[42206], 00:17:12.641 | 99.99th=[42206] 00:17:12.641 bw ( KiB/s): min=13136, max=19632, per=23.04%, avg=16384.00, stdev=4593.37, samples=2 00:17:12.641 iops : min= 3284, max= 4908, avg=4096.00, stdev=1148.34, samples=2 00:17:12.641 lat (msec) : 4=0.21%, 10=9.84%, 20=68.28%, 50=21.66% 00:17:12.641 cpu : usr=3.08%, sys=6.56%, ctx=539, majf=0, minf=1 00:17:12.641 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:12.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:12.641 issued rwts: total=3830,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.641 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:12.641 job3: (groupid=0, jobs=1): err= 0: pid=2204660: Mon Jun 10 12:06:01 2024 00:17:12.641 read: IOPS=4569, BW=17.8MiB/s (18.7MB/s)(17.9MiB/1003msec) 00:17:12.641 slat (usec): min=2, max=12795, avg=108.38, stdev=734.95 00:17:12.641 clat (usec): min=845, max=38812, avg=13537.51, stdev=4446.50 00:17:12.641 lat (usec): min=4034, max=38823, avg=13645.89, stdev=4498.77 00:17:12.641 clat percentiles (usec): 00:17:12.641 | 1.00th=[ 4490], 5.00th=[ 8848], 10.00th=[10552], 20.00th=[11076], 00:17:12.641 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[12649], 00:17:12.641 | 70.00th=[14222], 80.00th=[15664], 90.00th=[18744], 95.00th=[23987], 00:17:12.641 | 99.00th=[26870], 99.50th=[33424], 99.90th=[39060], 99.95th=[39060], 00:17:12.641 | 99.99th=[39060] 00:17:12.641 write: IOPS=4594, BW=17.9MiB/s 
(18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:17:12.641 slat (usec): min=3, max=13977, avg=97.44, stdev=590.60 00:17:12.641 clat (usec): min=398, max=50232, avg=14074.58, stdev=7713.12 00:17:12.641 lat (usec): min=890, max=50240, avg=14172.02, stdev=7761.16 00:17:12.641 clat percentiles (usec): 00:17:12.641 | 1.00th=[ 3228], 5.00th=[ 7570], 10.00th=[ 9110], 20.00th=[10814], 00:17:12.641 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11731], 60.00th=[11994], 00:17:12.641 | 70.00th=[12649], 80.00th=[15664], 90.00th=[21627], 95.00th=[33817], 00:17:12.641 | 99.00th=[44827], 99.50th=[46924], 99.90th=[49546], 99.95th=[50070], 00:17:12.641 | 99.99th=[50070] 00:17:12.641 bw ( KiB/s): min=16416, max=20480, per=25.94%, avg=18448.00, stdev=2873.68, samples=2 00:17:12.641 iops : min= 4104, max= 5120, avg=4612.00, stdev=718.42, samples=2 00:17:12.641 lat (usec) : 500=0.01%, 1000=0.09% 00:17:12.641 lat (msec) : 2=0.09%, 4=0.52%, 10=10.75%, 20=77.30%, 50=11.20% 00:17:12.641 lat (msec) : 100=0.04% 00:17:12.641 cpu : usr=4.49%, sys=7.09%, ctx=517, majf=0, minf=1 00:17:12.641 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:17:12.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:12.641 issued rwts: total=4583,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.641 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:12.641 00:17:12.641 Run status group 0 (all jobs): 00:17:12.641 READ: bw=66.5MiB/s (69.7MB/s), 13.6MiB/s-20.4MiB/s (14.3MB/s-21.4MB/s), io=67.0MiB (70.2MB), run=1002-1008msec 00:17:12.641 WRITE: bw=69.4MiB/s (72.8MB/s), 13.9MiB/s-22.0MiB/s (14.6MB/s-23.0MB/s), io=70.0MiB (73.4MB), run=1002-1008msec 00:17:12.641 00:17:12.641 Disk stats (read/write): 00:17:12.642 nvme0n1: ios=4585/4608, merge=0/0, ticks=27277/25935, in_queue=53212, util=83.87% 00:17:12.642 nvme0n2: ios=2665/3072, merge=0/0, ticks=32096/30662, 
in_queue=62758, util=100.00% 00:17:12.642 nvme0n3: ios=3110/3495, merge=0/0, ticks=46365/54323, in_queue=100688, util=99.79% 00:17:12.642 nvme0n4: ios=3642/3638, merge=0/0, ticks=38110/39104, in_queue=77214, util=99.13% 00:17:12.642 12:06:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:12.642 [global] 00:17:12.642 thread=1 00:17:12.642 invalidate=1 00:17:12.642 rw=randwrite 00:17:12.642 time_based=1 00:17:12.642 runtime=1 00:17:12.642 ioengine=libaio 00:17:12.642 direct=1 00:17:12.642 bs=4096 00:17:12.642 iodepth=128 00:17:12.642 norandommap=0 00:17:12.642 numjobs=1 00:17:12.642 00:17:12.642 verify_dump=1 00:17:12.642 verify_backlog=512 00:17:12.642 verify_state_save=0 00:17:12.642 do_verify=1 00:17:12.642 verify=crc32c-intel 00:17:12.642 [job0] 00:17:12.642 filename=/dev/nvme0n1 00:17:12.642 [job1] 00:17:12.642 filename=/dev/nvme0n2 00:17:12.642 [job2] 00:17:12.642 filename=/dev/nvme0n3 00:17:12.642 [job3] 00:17:12.642 filename=/dev/nvme0n4 00:17:12.642 Could not set queue depth (nvme0n1) 00:17:12.642 Could not set queue depth (nvme0n2) 00:17:12.642 Could not set queue depth (nvme0n3) 00:17:12.642 Could not set queue depth (nvme0n4) 00:17:12.899 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:12.899 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:12.899 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:12.899 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:12.899 fio-3.35 00:17:12.899 Starting 4 threads 00:17:14.290 00:17:14.290 job0: (groupid=0, jobs=1): err= 0: pid=2205167: Mon Jun 10 12:06:03 2024 00:17:14.290 read: IOPS=5215, BW=20.4MiB/s (21.4MB/s)(20.5MiB/1006msec) 
00:17:14.290 slat (usec): min=2, max=10745, avg=100.04, stdev=713.73 00:17:14.290 clat (usec): min=2298, max=22630, avg=12616.52, stdev=2886.28 00:17:14.290 lat (usec): min=4353, max=22641, avg=12716.56, stdev=2931.01 00:17:14.290 clat percentiles (usec): 00:17:14.290 | 1.00th=[ 5473], 5.00th=[ 8979], 10.00th=[10552], 20.00th=[11076], 00:17:14.290 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11863], 60.00th=[11994], 00:17:14.290 | 70.00th=[12518], 80.00th=[14091], 90.00th=[17171], 95.00th=[19006], 00:17:14.290 | 99.00th=[21627], 99.50th=[22152], 99.90th=[22414], 99.95th=[22414], 00:17:14.290 | 99.99th=[22676] 00:17:14.290 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:17:14.290 slat (usec): min=3, max=9309, avg=77.04, stdev=486.95 00:17:14.290 clat (usec): min=1995, max=22345, avg=10830.48, stdev=2010.87 00:17:14.290 lat (usec): min=2013, max=22349, avg=10907.52, stdev=2069.60 00:17:14.290 clat percentiles (usec): 00:17:14.290 | 1.00th=[ 3949], 5.00th=[ 6259], 10.00th=[ 8094], 20.00th=[10290], 00:17:14.290 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11600], 00:17:14.290 | 70.00th=[11731], 80.00th=[11863], 90.00th=[12125], 95.00th=[12256], 00:17:14.290 | 99.00th=[12518], 99.50th=[18482], 99.90th=[21890], 99.95th=[22152], 00:17:14.290 | 99.99th=[22414] 00:17:14.290 bw ( KiB/s): min=22152, max=22896, per=27.80%, avg=22524.00, stdev=526.09, samples=2 00:17:14.290 iops : min= 5538, max= 5724, avg=5631.00, stdev=131.52, samples=2 00:17:14.290 lat (msec) : 2=0.02%, 4=0.67%, 10=13.22%, 20=84.46%, 50=1.64% 00:17:14.290 cpu : usr=5.47%, sys=6.87%, ctx=590, majf=0, minf=1 00:17:14.290 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:17:14.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.290 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:14.290 issued rwts: total=5247,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.290 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:17:14.290 job1: (groupid=0, jobs=1): err= 0: pid=2205168: Mon Jun 10 12:06:03 2024 00:17:14.290 read: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec) 00:17:14.290 slat (nsec): min=1945, max=11681k, avg=104506.54, stdev=737409.53 00:17:14.290 clat (usec): min=4798, max=23475, avg=12920.88, stdev=3068.18 00:17:14.290 lat (usec): min=4811, max=23504, avg=13025.39, stdev=3114.62 00:17:14.290 clat percentiles (usec): 00:17:14.290 | 1.00th=[ 5473], 5.00th=[ 8717], 10.00th=[10159], 20.00th=[11076], 00:17:14.290 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12125], 60.00th=[12518], 00:17:14.290 | 70.00th=[13042], 80.00th=[15008], 90.00th=[17957], 95.00th=[19530], 00:17:14.290 | 99.00th=[21627], 99.50th=[21890], 99.90th=[23200], 99.95th=[23200], 00:17:14.290 | 99.99th=[23462] 00:17:14.290 write: IOPS=5540, BW=21.6MiB/s (22.7MB/s)(21.8MiB/1009msec); 0 zone resets 00:17:14.290 slat (usec): min=2, max=9759, avg=75.75, stdev=312.64 00:17:14.290 clat (usec): min=1569, max=23411, avg=11057.44, stdev=2342.54 00:17:14.290 lat (usec): min=1595, max=23426, avg=11133.20, stdev=2369.43 00:17:14.290 clat percentiles (usec): 00:17:14.290 | 1.00th=[ 4178], 5.00th=[ 5669], 10.00th=[ 7570], 20.00th=[ 9634], 00:17:14.290 | 30.00th=[11076], 40.00th=[11600], 50.00th=[11994], 60.00th=[12125], 00:17:14.290 | 70.00th=[12256], 80.00th=[12256], 90.00th=[12518], 95.00th=[12780], 00:17:14.290 | 99.00th=[17433], 99.50th=[19530], 99.90th=[21890], 99.95th=[23200], 00:17:14.290 | 99.99th=[23462] 00:17:14.290 bw ( KiB/s): min=21512, max=22192, per=26.97%, avg=21852.00, stdev=480.83, samples=2 00:17:14.290 iops : min= 5378, max= 5548, avg=5463.00, stdev=120.21, samples=2 00:17:14.290 lat (msec) : 2=0.07%, 4=0.39%, 10=14.78%, 20=82.70%, 50=2.05% 00:17:14.290 cpu : usr=5.65%, sys=5.56%, ctx=716, majf=0, minf=1 00:17:14.290 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:17:14.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.291 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:14.291 issued rwts: total=5120,5590,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.291 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:14.291 job2: (groupid=0, jobs=1): err= 0: pid=2205169: Mon Jun 10 12:06:03 2024 00:17:14.291 read: IOPS=4583, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:17:14.291 slat (usec): min=2, max=12663, avg=119.58, stdev=852.61 00:17:14.291 clat (usec): min=3655, max=27155, avg=14912.90, stdev=3602.06 00:17:14.291 lat (usec): min=3665, max=27184, avg=15032.48, stdev=3656.41 00:17:14.291 clat percentiles (usec): 00:17:14.291 | 1.00th=[ 5997], 5.00th=[10028], 10.00th=[11600], 20.00th=[13042], 00:17:14.291 | 30.00th=[13304], 40.00th=[13698], 50.00th=[14091], 60.00th=[14353], 00:17:14.291 | 70.00th=[15139], 80.00th=[17171], 90.00th=[20579], 95.00th=[22676], 00:17:14.291 | 99.00th=[25035], 99.50th=[25560], 99.90th=[26084], 99.95th=[26346], 00:17:14.291 | 99.99th=[27132] 00:17:14.291 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:17:14.291 slat (usec): min=3, max=11011, avg=89.15, stdev=484.23 00:17:14.291 clat (usec): min=1980, max=26001, avg=12678.59, stdev=2461.78 00:17:14.291 lat (usec): min=1998, max=26006, avg=12767.74, stdev=2512.46 00:17:14.291 clat percentiles (usec): 00:17:14.291 | 1.00th=[ 4621], 5.00th=[ 6718], 10.00th=[ 8848], 20.00th=[11863], 00:17:14.291 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13435], 60.00th=[13698], 00:17:14.291 | 70.00th=[13829], 80.00th=[14091], 90.00th=[14222], 95.00th=[14353], 00:17:14.291 | 99.00th=[14746], 99.50th=[16712], 99.90th=[25560], 99.95th=[26084], 00:17:14.291 | 99.99th=[26084] 00:17:14.291 bw ( KiB/s): min=16656, max=20208, per=22.75%, avg=18432.00, stdev=2511.64, samples=2 00:17:14.291 iops : min= 4164, max= 5052, avg=4608.00, stdev=627.91, samples=2 00:17:14.291 lat (msec) : 2=0.02%, 4=0.40%, 10=8.72%, 20=84.64%, 50=6.22% 
00:17:14.291 cpu : usr=4.89%, sys=5.98%, ctx=546, majf=0, minf=1 00:17:14.291 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:17:14.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.291 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:14.291 issued rwts: total=4602,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.291 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:14.291 job3: (groupid=0, jobs=1): err= 0: pid=2205170: Mon Jun 10 12:06:03 2024 00:17:14.291 read: IOPS=4513, BW=17.6MiB/s (18.5MB/s)(17.7MiB/1006msec) 00:17:14.291 slat (usec): min=2, max=13384, avg=119.34, stdev=910.00 00:17:14.291 clat (usec): min=4707, max=27967, avg=14954.26, stdev=3399.95 00:17:14.291 lat (usec): min=5052, max=27971, avg=15073.59, stdev=3466.35 00:17:14.291 clat percentiles (usec): 00:17:14.291 | 1.00th=[ 6259], 5.00th=[11338], 10.00th=[12125], 20.00th=[13304], 00:17:14.291 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14091], 60.00th=[14353], 00:17:14.291 | 70.00th=[14615], 80.00th=[16188], 90.00th=[20317], 95.00th=[22676], 00:17:14.291 | 99.00th=[25560], 99.50th=[26346], 99.90th=[27919], 99.95th=[27919], 00:17:14.291 | 99.99th=[27919] 00:17:14.291 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:17:14.291 slat (usec): min=2, max=11391, avg=91.11, stdev=597.77 00:17:14.291 clat (usec): min=2011, max=27964, avg=12956.16, stdev=2985.16 00:17:14.291 lat (usec): min=2029, max=27969, avg=13047.27, stdev=3030.11 00:17:14.291 clat percentiles (usec): 00:17:14.291 | 1.00th=[ 4490], 5.00th=[ 7832], 10.00th=[ 8586], 20.00th=[10945], 00:17:14.291 | 30.00th=[12780], 40.00th=[13304], 50.00th=[13566], 60.00th=[13829], 00:17:14.291 | 70.00th=[14091], 80.00th=[14353], 90.00th=[15270], 95.00th=[18220], 00:17:14.291 | 99.00th=[22152], 99.50th=[22152], 99.90th=[26346], 99.95th=[26870], 00:17:14.291 | 99.99th=[27919] 00:17:14.291 bw ( KiB/s): min=16984, max=19880, 
per=22.75%, avg=18432.00, stdev=2047.78, samples=2 00:17:14.291 iops : min= 4246, max= 4970, avg=4608.00, stdev=511.95, samples=2 00:17:14.291 lat (msec) : 4=0.25%, 10=10.19%, 20=83.52%, 50=6.04% 00:17:14.291 cpu : usr=4.68%, sys=6.97%, ctx=447, majf=0, minf=1 00:17:14.291 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:17:14.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.291 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:14.291 issued rwts: total=4541,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.291 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:14.291 00:17:14.291 Run status group 0 (all jobs): 00:17:14.291 READ: bw=75.5MiB/s (79.2MB/s), 17.6MiB/s-20.4MiB/s (18.5MB/s-21.4MB/s), io=76.2MiB (79.9MB), run=1004-1009msec 00:17:14.291 WRITE: bw=79.1MiB/s (83.0MB/s), 17.9MiB/s-21.9MiB/s (18.8MB/s-22.9MB/s), io=79.8MiB (83.7MB), run=1004-1009msec 00:17:14.291 00:17:14.291 Disk stats (read/write): 00:17:14.291 nvme0n1: ios=4239/4608, merge=0/0, ticks=52871/48704, in_queue=101575, util=98.80% 00:17:14.291 nvme0n2: ios=4096/4606, merge=0/0, ticks=51174/49703, in_queue=100877, util=84.73% 00:17:14.291 nvme0n3: ios=3643/3975, merge=0/0, ticks=52846/49123, in_queue=101969, util=99.57% 00:17:14.291 nvme0n4: ios=3584/3943, merge=0/0, ticks=51146/49294, in_queue=100440, util=89.41% 00:17:14.291 12:06:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:17:14.291 12:06:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2205433 00:17:14.291 12:06:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:14.291 12:06:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:17:14.291 [global] 00:17:14.291 thread=1 00:17:14.291 invalidate=1 00:17:14.291 rw=read 00:17:14.291 time_based=1 00:17:14.291 runtime=10 00:17:14.291 
ioengine=libaio 00:17:14.291 direct=1 00:17:14.291 bs=4096 00:17:14.291 iodepth=1 00:17:14.291 norandommap=1 00:17:14.291 numjobs=1 00:17:14.291 00:17:14.291 [job0] 00:17:14.291 filename=/dev/nvme0n1 00:17:14.291 [job1] 00:17:14.291 filename=/dev/nvme0n2 00:17:14.291 [job2] 00:17:14.291 filename=/dev/nvme0n3 00:17:14.291 [job3] 00:17:14.291 filename=/dev/nvme0n4 00:17:14.291 Could not set queue depth (nvme0n1) 00:17:14.291 Could not set queue depth (nvme0n2) 00:17:14.291 Could not set queue depth (nvme0n3) 00:17:14.291 Could not set queue depth (nvme0n4) 00:17:14.548 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:14.548 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:14.548 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:14.548 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:14.548 fio-3.35 00:17:14.548 Starting 4 threads 00:17:17.125 12:06:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:17.125 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=36872192, buflen=4096 00:17:17.125 fio: pid=2205597, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:17.125 12:06:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:17.382 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=42160128, buflen=4096 00:17:17.382 fio: pid=2205596, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:17.382 12:06:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:17.382 12:06:06 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:17.639 12:06:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:17.639 12:06:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:17.639 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=8949760, buflen=4096 00:17:17.639 fio: pid=2205594, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:17.896 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=5971968, buflen=4096 00:17:17.896 fio: pid=2205595, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:17.896 12:06:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:17.896 12:06:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:17.896 00:17:17.896 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2205594: Mon Jun 10 12:06:07 2024 00:17:17.896 read: IOPS=716, BW=2864KiB/s (2932kB/s)(8740KiB/3052msec) 00:17:17.896 slat (usec): min=8, max=6813, avg=13.73, stdev=145.56 00:17:17.896 clat (usec): min=186, max=42150, avg=1371.72, stdev=6561.64 00:17:17.896 lat (usec): min=196, max=48053, avg=1385.45, stdev=6584.37 00:17:17.896 clat percentiles (usec): 00:17:17.896 | 1.00th=[ 219], 5.00th=[ 241], 10.00th=[ 249], 20.00th=[ 273], 00:17:17.896 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 293], 60.00th=[ 297], 00:17:17.896 | 70.00th=[ 302], 80.00th=[ 306], 90.00th=[ 314], 95.00th=[ 322], 00:17:17.896 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:17:17.896 | 99.99th=[42206] 00:17:17.896 bw ( KiB/s): min= 103, 
max=11696, per=11.98%, avg=3436.60, stdev=4819.29, samples=5 00:17:17.896 iops : min= 25, max= 2924, avg=859.00, stdev=1204.95, samples=5 00:17:17.896 lat (usec) : 250=10.20%, 500=86.96%, 750=0.14% 00:17:17.896 lat (msec) : 50=2.65% 00:17:17.896 cpu : usr=0.29%, sys=0.98%, ctx=2188, majf=0, minf=1 00:17:17.896 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:17.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.896 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.896 issued rwts: total=2186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:17.896 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:17.896 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2205595: Mon Jun 10 12:06:07 2024 00:17:17.896 read: IOPS=455, BW=1823KiB/s (1866kB/s)(5832KiB/3200msec) 00:17:17.896 slat (usec): min=3, max=11704, avg=22.47, stdev=363.18 00:17:17.896 clat (usec): min=198, max=43808, avg=2156.20, stdev=8613.70 00:17:17.896 lat (usec): min=204, max=43828, avg=2173.56, stdev=8620.40 00:17:17.896 clat percentiles (usec): 00:17:17.896 | 1.00th=[ 217], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 241], 00:17:17.896 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 251], 00:17:17.896 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 420], 00:17:17.896 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[43779], 00:17:17.896 | 99.99th=[43779] 00:17:17.896 bw ( KiB/s): min= 96, max= 7546, per=4.67%, avg=1340.33, stdev=3040.15, samples=6 00:17:17.896 iops : min= 24, max= 1886, avg=335.00, stdev=759.83, samples=6 00:17:17.896 lat (usec) : 250=54.76%, 500=40.44%, 750=0.07% 00:17:17.896 lat (msec) : 50=4.66% 00:17:17.896 cpu : usr=0.19%, sys=0.50%, ctx=1463, majf=0, minf=1 00:17:17.896 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:17.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0%
00:17:17.896 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:17.896 issued rwts: total=1459,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:17.896 latency : target=0, window=0, percentile=100.00%, depth=1
00:17:17.896 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2205596: Mon Jun 10 12:06:07 2024
00:17:17.896 read: IOPS=3654, BW=14.3MiB/s (15.0MB/s)(40.2MiB/2817msec)
00:17:17.896 slat (usec): min=8, max=11524, avg=12.29, stdev=156.20
00:17:17.896 clat (usec): min=198, max=3701, avg=257.42, stdev=46.88
00:17:17.896 lat (usec): min=207, max=11904, avg=269.71, stdev=165.00
00:17:17.896 clat percentiles (usec):
00:17:17.896 | 1.00th=[ 221], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 243],
00:17:17.896 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 260],
00:17:17.896 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 285],
00:17:17.897 | 99.00th=[ 306], 99.50th=[ 322], 99.90th=[ 445], 99.95th=[ 506],
00:17:17.897 | 99.99th=[ 2769]
00:17:17.897 bw ( KiB/s): min=14344, max=15000, per=51.55%, avg=14780.80, stdev=257.47, samples=5
00:17:17.897 iops : min= 3586, max= 3750, avg=3695.20, stdev=64.37, samples=5
00:17:17.897 lat (usec) : 250=33.81%, 500=66.13%, 750=0.03%
00:17:17.897 lat (msec) : 2=0.01%, 4=0.02%
00:17:17.897 cpu : usr=2.38%, sys=6.43%, ctx=10296, majf=0, minf=1
00:17:17.897 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:17:17.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:17.897 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:17.897 issued rwts: total=10294,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:17.897 latency : target=0, window=0, percentile=100.00%, depth=1
00:17:17.897 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2205597: Mon Jun 10 12:06:07 2024
00:17:17.897 read: IOPS=3420, BW=13.4MiB/s (14.0MB/s)(35.2MiB/2632msec)
00:17:17.897 slat (nsec): min=8798, max=45977, avg=10024.80, stdev=1491.47
00:17:17.897 clat (usec): min=224, max=2212, avg=277.61, stdev=39.99
00:17:17.897 lat (usec): min=235, max=2221, avg=287.63, stdev=40.07
00:17:17.897 clat percentiles (usec):
00:17:17.897 | 1.00th=[ 243], 5.00th=[ 253], 10.00th=[ 258], 20.00th=[ 265],
00:17:17.897 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 281],
00:17:17.897 | 70.00th=[ 285], 80.00th=[ 289], 90.00th=[ 297], 95.00th=[ 306],
00:17:17.897 | 99.00th=[ 322], 99.50th=[ 334], 99.90th=[ 461], 99.95th=[ 1418],
00:17:17.897 | 99.99th=[ 2212]
00:17:17.897 bw ( KiB/s): min=13440, max=14296, per=48.21%, avg=13824.00, stdev=316.23, samples=5
00:17:17.897 iops : min= 3360, max= 3574, avg=3456.00, stdev=79.06, samples=5
00:17:17.897 lat (usec) : 250=3.79%, 500=96.10%, 750=0.02%, 1000=0.01%
00:17:17.897 lat (msec) : 2=0.06%, 4=0.01%
00:17:17.897 cpu : usr=2.81%, sys=5.51%, ctx=9004, majf=0, minf=2
00:17:17.897 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:17:17.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:17.897 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:17.897 issued rwts: total=9003,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:17.897 latency : target=0, window=0, percentile=100.00%, depth=1
00:17:17.897
00:17:17.897 Run status group 0 (all jobs):
00:17:17.897 READ: bw=28.0MiB/s (29.4MB/s), 1823KiB/s-14.3MiB/s (1866kB/s-15.0MB/s), io=89.6MiB (94.0MB), run=2632-3200msec
00:17:17.897
00:17:17.897 Disk stats (read/write):
00:17:17.897 nvme0n1: ios=2180/0, merge=0/0, ticks=2787/0, in_queue=2787, util=94.29%
00:17:17.897 nvme0n2: ios=1174/0, merge=0/0, ticks=4063/0, in_queue=4063, util=99.22%
00:17:17.897 nvme0n3: ios=9514/0, merge=0/0, ticks=2351/0, in_queue=2351, util=95.94%
00:17:17.897 nvme0n4: ios=8940/0, merge=0/0, ticks=3234/0, in_queue=3234, util=99.03%
00:17:17.897 12:06:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:17:17.897 12:06:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:17:18.153 12:06:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:17:18.153 12:06:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:17:18.410 12:06:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:17:18.410 12:06:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:17:18.668 12:06:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:17:18.668 12:06:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:17:18.668 12:06:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0
00:17:18.668 12:06:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 2205433
00:17:18.668 12:06:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4
00:17:18.668 12:06:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:17:18.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:18.925 12:06:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:17:18.925 12:06:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1218 -- # local i=0
00:17:18.925 12:06:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL
00:17:18.925 12:06:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:18.925 12:06:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL
00:17:18.925 12:06:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:18.925 12:06:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1230 -- # return 0
00:17:18.925 12:06:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:17:18.925 12:06:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:17:18.925 nvmf hotplug test: fio failed as expected
00:17:18.925 12:06:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:19.183 12:06:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:17:19.183 12:06:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:17:19.183 12:06:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:17:19.183 12:06:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:17:19.183 12:06:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini
00:17:19.183 12:06:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup
00:17:19.183 12:06:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync
00:17:19.183 12:06:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:17:19.183 12:06:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e
00:17:19.183 12:06:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20}
00:17:19.183 12:06:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:17:19.183 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:17:19.183 12:06:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:17:19.183 12:06:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e
00:17:19.183 12:06:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0
00:17:19.183 12:06:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2202231 ']'
00:17:19.183 12:06:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2202231
00:17:19.183 12:06:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@949 -- # '[' -z 2202231 ']'
00:17:19.183 12:06:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # kill -0 2202231
00:17:19.183 12:06:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # uname
00:17:19.183 12:06:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:17:19.183 12:06:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2202231
00:17:19.183 12:06:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:17:19.183 12:06:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:17:19.183 12:06:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2202231'
00:17:19.183 killing process with pid 2202231
00:17:19.183 12:06:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@968 -- # kill 2202231
00:17:19.183 12:06:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@973 -- # wait 2202231
00:17:19.441 12:06:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:17:19.441 12:06:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:17:19.441 12:06:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:17:19.441 12:06:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:17:19.441 12:06:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns
00:17:19.441 12:06:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:19.441 12:06:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:17:19.441 12:06:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:21.971 12:06:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:17:21.971
00:17:21.971 real 0m28.430s
00:17:21.971 user 2m3.135s
00:17:21.971 sys 0m10.661s
00:17:21.971 12:06:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1125 -- # xtrace_disable
00:17:21.971 12:06:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:17:21.971 ************************************
00:17:21.971 END TEST nvmf_fio_target
00:17:21.971 ************************************
00:17:21.971 12:06:10 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:17:21.971 12:06:10 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:17:21.971 12:06:10 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:17:21.971 12:06:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:17:21.971 ************************************
00:17:21.971 START TEST nvmf_bdevio
00:17:21.971 ************************************
00:17:21.971 12:06:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:17:21.971 * Looking for test storage...
00:17:21.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable
00:17:21.971 12:06:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:17:28.520 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:17:28.520 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=()
00:17:28.520 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs
00:17:28.520 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=()
00:17:28.520 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:17:28.520 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=()
00:17:28.520 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers
00:17:28.520 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=()
00:17:28.520 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs
00:17:28.520 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=()
00:17:28.520 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810
00:17:28.520 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=()
00:17:28.520 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722
00:17:28.520 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=()
00:17:28.520 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx
00:17:28.520 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:17:28.520 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:17:28.520 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:17:28.520 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:17:28.520 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:17:28.520 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:17:28.520 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:17:28.520 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:17:28.520 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:17:28.521 Found 0000:af:00.0 (0x8086 - 0x159b)
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:17:28.521 Found 0000:af:00.1 (0x8086 - 0x159b)
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]]
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:17:28.521 Found net devices under 0000:af:00.0: cvl_0_0
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]]
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:17:28.521 Found net devices under 0000:af:00.1: cvl_0_1
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:17:28.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:28.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms
00:17:28.521
00:17:28.521 --- 10.0.0.2 ping statistics ---
00:17:28.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:28.521 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:28.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:28.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms
00:17:28.521
00:17:28.521 --- 10.0.0.1 ping statistics ---
00:17:28.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:28.521 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@723 -- # xtrace_disable
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2210446
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2210446
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@830 -- # '[' -z 2210446 ']'
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@835 -- # local max_retries=100
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:28.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@839 -- # xtrace_disable
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:17:28.521 12:06:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
00:17:28.521 [2024-06-10 12:06:17.383007] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization...
00:17:28.521 [2024-06-10 12:06:17.383057] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:28.521 EAL: No free 2048 kB hugepages reported on node 1
00:17:28.521 [2024-06-10 12:06:17.457837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:17:28.521 [2024-06-10 12:06:17.535464] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:28.521 [2024-06-10 12:06:17.535520] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:28.521 [2024-06-10 12:06:17.535531] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:28.521 [2024-06-10 12:06:17.535540] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:28.521 [2024-06-10 12:06:17.535547] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:28.521 [2024-06-10 12:06:17.535661] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4
00:17:28.521 [2024-06-10 12:06:17.535783] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5
00:17:28.521 [2024-06-10 12:06:17.535891] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3
00:17:28.521 [2024-06-10 12:06:17.535893] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6
00:17:28.779 12:06:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:17:28.779 12:06:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@863 -- # return 0
00:17:28.779 12:06:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:17:28.779 12:06:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@729 -- # xtrace_disable
00:17:28.779 12:06:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:17:28.779 12:06:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:28.779 12:06:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:17:28.779 12:06:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable
00:17:28.779 12:06:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:17:28.779 [2024-06-10 12:06:18.225116] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:28.779 12:06:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:17:28.779 12:06:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:17:28.779 12:06:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable
00:17:28.779 12:06:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:17:28.779 Malloc0
00:17:28.779 12:06:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:17:28.779 12:06:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:17:28.779 12:06:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable
00:17:28.779 12:06:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:17:28.779 12:06:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:17:28.779 12:06:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:17:28.779 12:06:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable
00:17:28.779 12:06:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:17:28.779 12:06:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:17:28.779 12:06:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:28.779 12:06:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable
00:17:28.779 12:06:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:17:28.779 [2024-06-10 12:06:18.279630] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:28.779 12:06:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:17:28.779 12:06:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62
00:17:28.779 12:06:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:17:28.779 12:06:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=()
00:17:28.779 12:06:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config
00:17:28.779 12:06:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:17:28.779 12:06:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:17:28.779 { 00:17:28.779 "params": { 00:17:28.779 "name": "Nvme$subsystem", 00:17:28.779 "trtype": "$TEST_TRANSPORT", 00:17:28.779 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:28.779 "adrfam": "ipv4", 00:17:28.779 "trsvcid": "$NVMF_PORT", 00:17:28.779 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:28.779 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:28.779 "hdgst": ${hdgst:-false}, 00:17:28.779 "ddgst": ${ddgst:-false} 00:17:28.779 }, 00:17:28.779 "method": "bdev_nvme_attach_controller" 00:17:28.779 } 00:17:28.779 EOF 00:17:28.779 )") 00:17:28.779 12:06:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:17:28.779 12:06:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:17:29.036 12:06:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:17:29.036 12:06:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:29.036 "params": { 00:17:29.036 "name": "Nvme1", 00:17:29.036 "trtype": "tcp", 00:17:29.036 "traddr": "10.0.0.2", 00:17:29.036 "adrfam": "ipv4", 00:17:29.036 "trsvcid": "4420", 00:17:29.036 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.036 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:29.036 "hdgst": false, 00:17:29.036 "ddgst": false 00:17:29.036 }, 00:17:29.036 "method": "bdev_nvme_attach_controller" 00:17:29.036 }' 00:17:29.036 [2024-06-10 12:06:18.333648] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:17:29.036 [2024-06-10 12:06:18.333698] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2210568 ] 00:17:29.036 EAL: No free 2048 kB hugepages reported on node 1 00:17:29.036 [2024-06-10 12:06:18.402575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:29.036 [2024-06-10 12:06:18.474870] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.036 [2024-06-10 12:06:18.474963] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:17:29.036 [2024-06-10 12:06:18.474965] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.293 I/O targets: 00:17:29.293 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:29.293 00:17:29.293 00:17:29.293 CUnit - A unit testing framework for C - Version 2.1-3 00:17:29.293 http://cunit.sourceforge.net/ 00:17:29.293 00:17:29.293 00:17:29.293 Suite: bdevio tests on: Nvme1n1 00:17:29.293 Test: blockdev write read block ...passed 00:17:29.549 Test: blockdev write zeroes read block ...passed 00:17:29.549 Test: blockdev write zeroes read no split ...passed 00:17:29.549 Test: blockdev write zeroes read split ...passed 00:17:29.549 Test: blockdev write zeroes read split partial ...passed 00:17:29.549 Test: blockdev reset ...[2024-06-10 12:06:18.903662] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:29.549 [2024-06-10 12:06:18.903726] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e95b0 (9): Bad file descriptor 00:17:29.549 [2024-06-10 12:06:18.957772] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:29.549 passed 00:17:29.549 Test: blockdev write read 8 blocks ...passed 00:17:29.549 Test: blockdev write read size > 128k ...passed 00:17:29.549 Test: blockdev write read invalid size ...passed 00:17:29.549 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:29.549 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:29.549 Test: blockdev write read max offset ...passed 00:17:29.806 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:29.806 Test: blockdev writev readv 8 blocks ...passed 00:17:29.806 Test: blockdev writev readv 30 x 1block ...passed 00:17:29.806 Test: blockdev writev readv block ...passed 00:17:29.806 Test: blockdev writev readv size > 128k ...passed 00:17:29.806 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:29.806 Test: blockdev comparev and writev ...[2024-06-10 12:06:19.169837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.806 [2024-06-10 12:06:19.169866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.806 [2024-06-10 12:06:19.169881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.806 [2024-06-10 12:06:19.169892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:29.806 [2024-06-10 12:06:19.170140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.806 [2024-06-10 12:06:19.170152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:29.806 [2024-06-10 12:06:19.170166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.806 [2024-06-10 12:06:19.170176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:29.806 [2024-06-10 12:06:19.170418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.806 [2024-06-10 12:06:19.170430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:29.806 [2024-06-10 12:06:19.170449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.807 [2024-06-10 12:06:19.170458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:29.807 [2024-06-10 12:06:19.170702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.807 [2024-06-10 12:06:19.170713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:29.807 [2024-06-10 12:06:19.170727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.807 [2024-06-10 12:06:19.170737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:29.807 passed 00:17:29.807 Test: blockdev nvme passthru rw ...passed 00:17:29.807 Test: blockdev nvme passthru vendor specific ...[2024-06-10 12:06:19.253833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:29.807 [2024-06-10 12:06:19.253849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:29.807 [2024-06-10 12:06:19.253981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:29.807 [2024-06-10 12:06:19.253993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:29.807 [2024-06-10 12:06:19.254111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:29.807 [2024-06-10 12:06:19.254122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:29.807 [2024-06-10 12:06:19.254245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:29.807 [2024-06-10 12:06:19.254256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:29.807 passed 00:17:29.807 Test: blockdev nvme admin passthru ...passed 00:17:29.807 Test: blockdev copy ...passed 00:17:29.807 00:17:29.807 Run Summary: Type Total Ran Passed Failed Inactive 00:17:29.807 suites 1 1 n/a 0 0 00:17:29.807 tests 23 23 23 0 0 00:17:29.807 asserts 152 152 152 0 n/a 00:17:29.807 00:17:29.807 Elapsed time = 1.120 seconds 00:17:30.064 12:06:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:30.064 12:06:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:30.064 12:06:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:30.064 12:06:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:30.064 12:06:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:30.064 12:06:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 
00:17:30.064 12:06:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:30.064 12:06:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:17:30.064 12:06:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:30.064 12:06:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:17:30.064 12:06:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:30.064 12:06:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:30.064 rmmod nvme_tcp 00:17:30.064 rmmod nvme_fabrics 00:17:30.064 rmmod nvme_keyring 00:17:30.064 12:06:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:30.064 12:06:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:17:30.064 12:06:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:17:30.064 12:06:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2210446 ']' 00:17:30.064 12:06:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2210446 00:17:30.064 12:06:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@949 -- # '[' -z 2210446 ']' 00:17:30.064 12:06:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # kill -0 2210446 00:17:30.064 12:06:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # uname 00:17:30.064 12:06:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:30.064 12:06:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2210446 00:17:30.321 12:06:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:17:30.322 12:06:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:17:30.322 12:06:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2210446' 00:17:30.322 killing process with pid 2210446 00:17:30.322 12:06:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@968 -- # kill 
2210446 00:17:30.322 12:06:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@973 -- # wait 2210446 00:17:30.322 12:06:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:30.322 12:06:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:30.322 12:06:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:30.322 12:06:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:30.322 12:06:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:30.322 12:06:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.322 12:06:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:30.322 12:06:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.847 12:06:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:32.847 00:17:32.847 real 0m10.897s 00:17:32.847 user 0m12.856s 00:17:32.847 sys 0m5.406s 00:17:32.847 12:06:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:32.847 12:06:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:32.847 ************************************ 00:17:32.847 END TEST nvmf_bdevio 00:17:32.847 ************************************ 00:17:32.847 12:06:21 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:32.847 12:06:21 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:32.847 12:06:21 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:32.847 12:06:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:32.847 ************************************ 00:17:32.847 START TEST nvmf_auth_target 00:17:32.847 ************************************ 00:17:32.847 12:06:21 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:32.847 * Looking for test storage... 00:17:32.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:32.847 12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:32.847 
12:06:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:32.848 12:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:39.401 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:39.401 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.401 12:06:28 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:39.401 Found net devices under 0000:af:00.0: cvl_0_0 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:39.401 Found net devices under 0000:af:00.1: cvl_0_1 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:39.401 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:39.659 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:39.659 12:06:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:39.659 12:06:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:39.659 12:06:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:39.659 12:06:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:39.659 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:17:39.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:17:39.659 00:17:39.659 --- 10.0.0.2 ping statistics --- 00:17:39.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.659 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:17:39.659 12:06:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:39.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:39.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:17:39.659 00:17:39.659 --- 10.0.0.1 ping statistics --- 00:17:39.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.659 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:17:39.659 12:06:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:39.659 12:06:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:39.659 12:06:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:39.659 12:06:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:39.659 12:06:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:39.659 12:06:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:39.659 12:06:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:39.659 12:06:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:39.659 12:06:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:39.659 12:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:39.659 12:06:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:39.659 12:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:39.659 12:06:29 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:39.659 12:06:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2214526 00:17:39.659 12:06:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:39.659 12:06:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2214526 00:17:39.659 12:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 2214526 ']' 00:17:39.659 12:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.659 12:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:39.659 12:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.659 12:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:39.659 12:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.593 12:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:40.593 12:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:17:40.593 12:06:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:40.593 12:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:40.593 12:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.593 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.593 12:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2214701 00:17:40.593 12:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 
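The `nvmfappstart`/`waitforlisten` pair traced above launches `nvmf_tgt` inside the target namespace and then polls (`max_retries=100`) until the process is up and listening on its RPC socket, `/var/tmp/spdk.sock`. A minimal Python sketch of that polling loop — an illustration of what the shell helper does, not SPDK's actual implementation — might look like:

```python
import os
import socket
import time


def waitforlisten(sock_path, max_retries=100, delay=0.1):
    """Poll until a UNIX domain socket exists and accepts connections.

    Mirrors the shell waitforlisten helper seen in the trace: the target
    is started in the background, then the harness retries connecting to
    the RPC socket until it is ready or the retry budget runs out.
    """
    for _ in range(max_retries):
        if os.path.exists(sock_path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(sock_path)
                return True
            except OSError:
                pass  # socket file exists but nothing is accepting yet
            finally:
                s.close()
        time.sleep(delay)
    return False
```

In the trace the same helper is reused for both `/var/tmp/spdk.sock` (the target, pid 2214526) and `/var/tmp/host.sock` (the host-side `spdk_tgt`, pid 2214701).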
00:17:40.593 12:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:40.593 12:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:40.593 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:40.593 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:40.594 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:40.594 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:40.594 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:40.594 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:40.594 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=16a2be24b2800948dd5a0cfa6889009b1c8463867827d990 00:17:40.594 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:40.594 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.lVL 00:17:40.594 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 16a2be24b2800948dd5a0cfa6889009b1c8463867827d990 0 00:17:40.594 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 16a2be24b2800948dd5a0cfa6889009b1c8463867827d990 0 00:17:40.594 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:40.594 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:40.594 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=16a2be24b2800948dd5a0cfa6889009b1c8463867827d990 00:17:40.594 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:40.594 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 
00:17:40.594 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.lVL 00:17:40.594 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.lVL 00:17:40.594 12:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.lVL 00:17:40.594 12:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:40.594 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:40.594 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:40.594 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:40.594 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:40.594 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:40.594 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:40.594 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b214f0899fae79bd758e4cb2bff7b825bfc0450d0d9cbd9fdc4b8bfb53178f1f 00:17:40.594 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:40.594 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.aCP 00:17:40.594 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b214f0899fae79bd758e4cb2bff7b825bfc0450d0d9cbd9fdc4b8bfb53178f1f 3 00:17:40.594 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b214f0899fae79bd758e4cb2bff7b825bfc0450d0d9cbd9fdc4b8bfb53178f1f 3 00:17:40.594 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:40.594 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:40.594 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # 
key=b214f0899fae79bd758e4cb2bff7b825bfc0450d0d9cbd9fdc4b8bfb53178f1f 00:17:40.594 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:40.594 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.aCP 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.aCP 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.aCP 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=134067d476125d3afd15c59671324a0e 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.rKn 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 134067d476125d3afd15c59671324a0e 1 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 134067d476125d3afd15c59671324a0e 1 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=134067d476125d3afd15c59671324a0e 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.rKn 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.rKn 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.rKn 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d68d0d9cc6bf49db16c0ed47e252106845a211b40624ee91 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.MUX 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d68d0d9cc6bf49db16c0ed47e252106845a211b40624ee91 2 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d68d0d9cc6bf49db16c0ed47e252106845a211b40624ee91 2 00:17:40.853 12:06:30 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d68d0d9cc6bf49db16c0ed47e252106845a211b40624ee91 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.MUX 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.MUX 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.MUX 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a27bbc4e1394314d3d5988f22526530b619e1664e0ea2075 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.z8u 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a27bbc4e1394314d3d5988f22526530b619e1664e0ea2075 2 00:17:40.853 
12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a27bbc4e1394314d3d5988f22526530b619e1664e0ea2075 2 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a27bbc4e1394314d3d5988f22526530b619e1664e0ea2075 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.z8u 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.z8u 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.z8u 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=53fdebafbdeab97f79c22bb0b3a8f6a1 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Ebw 00:17:40.853 12:06:30 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 53fdebafbdeab97f79c22bb0b3a8f6a1 1 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 53fdebafbdeab97f79c22bb0b3a8f6a1 1 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=53fdebafbdeab97f79c22bb0b3a8f6a1 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:40.853 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:41.110 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Ebw 00:17:41.110 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Ebw 00:17:41.110 12:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.Ebw 00:17:41.110 12:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:41.110 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:41.110 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:41.110 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:41.110 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:41.110 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:41.110 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:41.110 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b0ad6de21b1f2f5762423114361b1c802cf9893393488426ec03027dfed5430b 00:17:41.110 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:41.110 
12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Kbb 00:17:41.110 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b0ad6de21b1f2f5762423114361b1c802cf9893393488426ec03027dfed5430b 3 00:17:41.110 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b0ad6de21b1f2f5762423114361b1c802cf9893393488426ec03027dfed5430b 3 00:17:41.110 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:41.110 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:41.110 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b0ad6de21b1f2f5762423114361b1c802cf9893393488426ec03027dfed5430b 00:17:41.110 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:41.110 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:41.110 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Kbb 00:17:41.110 12:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Kbb 00:17:41.110 12:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Kbb 00:17:41.110 12:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:41.110 12:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2214526 00:17:41.110 12:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 2214526 ']' 00:17:41.110 12:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.110 12:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:41.110 12:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:41.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.110 12:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:41.110 12:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.367 12:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:41.367 12:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:17:41.367 12:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2214701 /var/tmp/host.sock 00:17:41.367 12:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 2214701 ']' 00:17:41.367 12:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/host.sock 00:17:41.367 12:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:41.367 12:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:41.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
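The `gen_dhchap_key` calls traced above draw `len/2` random bytes from `/dev/urandom` with `xxd -p` and then pipe the hex string through an inline `python -` step (`format_dhchap_key`) that wraps it in a DHHC-1 secret. Decoding the `DHHC-1:00:MTZhMmJl...` secret that appears later in the `nvme connect` call shows the base64 payload covers the ASCII hex text itself plus four trailing bytes. A sketch of that formatting — assuming those trailing bytes are the CRC-32 of the key material, appended little-endian as in the NVMe DH-HMAC-CHAP secret representation — is:

```python
import base64
import zlib


def format_dhchap_key(hex_key: str, digest_id: int) -> str:
    """Wrap a hex key string in the DHHC-1 secret representation.

    Sketch of the inline python step in the trace. Assumptions: the
    secret material is the ASCII hex text itself, followed by its
    CRC-32 (little-endian), base64-encoded; digest_id is the one-byte
    hash identifier (0 = null, 1 = sha256, 2 = sha384, 3 = sha512).
    """
    key = hex_key.encode()  # the ASCII hex string, not the raw bytes
    crc = zlib.crc32(key).to_bytes(4, "little")
    b64 = base64.b64encode(key + crc).decode()
    return f"DHHC-1:{digest_id:02x}:{b64}:"
```

For the 48-digit null key above, `format_dhchap_key("16a2be...d990", 0)` yields a `DHHC-1:00:...:` string of the shape passed later as `--dhchap-secret`.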
00:17:41.367 12:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:41.367 12:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.367 12:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:41.367 12:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:17:41.367 12:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:17:41.367 12:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:41.367 12:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.367 12:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:41.367 12:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:41.367 12:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.lVL 00:17:41.367 12:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:41.367 12:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.367 12:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:41.367 12:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.lVL 00:17:41.367 12:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.lVL 00:17:41.624 12:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.aCP ]] 00:17:41.624 12:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.aCP 00:17:41.624 12:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:41.624 12:06:31 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.624 12:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:41.624 12:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.aCP 00:17:41.624 12:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.aCP 00:17:41.881 12:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:41.881 12:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.rKn 00:17:41.881 12:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:41.881 12:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.881 12:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:41.881 12:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.rKn 00:17:41.881 12:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.rKn 00:17:41.881 12:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.MUX ]] 00:17:41.881 12:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.MUX 00:17:41.881 12:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:41.881 12:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.881 12:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:41.881 12:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc 
keyring_file_add_key ckey1 /tmp/spdk.key-sha384.MUX 00:17:41.881 12:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.MUX 00:17:42.139 12:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:42.139 12:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.z8u 00:17:42.139 12:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:42.139 12:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.139 12:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:42.139 12:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.z8u 00:17:42.139 12:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.z8u 00:17:42.395 12:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.Ebw ]] 00:17:42.395 12:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ebw 00:17:42.395 12:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:42.395 12:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.395 12:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:42.395 12:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ebw 00:17:42.395 12:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.Ebw 00:17:42.395 12:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:42.395 12:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Kbb 00:17:42.395 12:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:42.395 12:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.395 12:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:42.395 12:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Kbb 00:17:42.395 12:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Kbb 00:17:42.652 12:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:17:42.652 12:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:42.652 12:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:42.652 12:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.652 12:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:42.652 12:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:42.908 12:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:17:42.908 12:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.908 12:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:42.908 12:06:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:42.908 12:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:42.908 12:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.908 12:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.908 12:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:42.908 12:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.908 12:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:42.908 12:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.908 12:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.165 00:17:43.165 12:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:43.165 12:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.165 12:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.165 12:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.165 
12:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:43.165 12:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:17:43.165 12:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:43.165 12:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:17:43.165 12:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:43.165 {
00:17:43.165 "cntlid": 1,
00:17:43.165 "qid": 0,
00:17:43.165 "state": "enabled",
00:17:43.165 "listen_address": {
00:17:43.165 "trtype": "TCP",
00:17:43.165 "adrfam": "IPv4",
00:17:43.165 "traddr": "10.0.0.2",
00:17:43.165 "trsvcid": "4420"
00:17:43.165 },
00:17:43.165 "peer_address": {
00:17:43.165 "trtype": "TCP",
00:17:43.165 "adrfam": "IPv4",
00:17:43.165 "traddr": "10.0.0.1",
00:17:43.165 "trsvcid": "47674"
00:17:43.165 },
00:17:43.165 "auth": {
00:17:43.165 "state": "completed",
00:17:43.165 "digest": "sha256",
00:17:43.165 "dhgroup": "null"
00:17:43.165 }
00:17:43.165 }
00:17:43.165 ]'
00:17:43.165 12:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:43.423 12:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:43.423 12:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:43.423 12:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:17:43.423 12:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:43.423 12:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:43.423 12:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:43.423 12:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:43.680 12:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MTZhMmJlMjRiMjgwMDk0OGRkNWEwY2ZhNjg4OTAwOWIxYzg0NjM4Njc4MjdkOTkwDJuT6A==: --dhchap-ctrl-secret DHHC-1:03:YjIxNGYwODk5ZmFlNzliZDc1OGU0Y2IyYmZmN2I4MjViZmMwNDUwZDBkOWNiZDlmZGM0YjhiZmI1MzE3OGYxZnEWiY8=:
00:17:44.245 12:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:44.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:44.245 12:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:17:44.245 12:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:17:44.245 12:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:44.245 12:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:17:44.245 12:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:44.245 12:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:17:44.245 12:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:17:44.245 12:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1
00:17:44.245 12:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:44.245 12:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:44.245 12:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:17:44.245 12:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:17:44.245 12:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:44.245 12:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:44.245 12:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:17:44.245 12:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:44.245 12:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:17:44.246 12:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:44.246 12:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:44.503
00:17:44.503 12:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:44.503 12:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:44.503 12:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:44.760 12:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:44.760 12:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:44.760 12:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:17:44.760 12:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:44.760 12:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:17:44.760 12:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:44.760 {
00:17:44.760 "cntlid": 3,
00:17:44.760 "qid": 0,
00:17:44.760 "state": "enabled",
00:17:44.760 "listen_address": {
00:17:44.760 "trtype": "TCP",
00:17:44.760 "adrfam": "IPv4",
00:17:44.760 "traddr": "10.0.0.2",
00:17:44.760 "trsvcid": "4420"
00:17:44.760 },
00:17:44.760 "peer_address": {
00:17:44.760 "trtype": "TCP",
00:17:44.760 "adrfam": "IPv4",
00:17:44.760 "traddr": "10.0.0.1",
00:17:44.760 "trsvcid": "59152"
00:17:44.760 },
00:17:44.760 "auth": {
00:17:44.760 "state": "completed",
00:17:44.760 "digest": "sha256",
00:17:44.760 "dhgroup": "null"
00:17:44.760 }
00:17:44.760 }
00:17:44.760 ]'
00:17:44.760 12:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:44.760 12:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:44.760 12:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:44.760 12:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:17:44.760 12:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:44.760 12:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:44.760 12:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:44.760 12:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:45.017 12:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MTM0MDY3ZDQ3NjEyNWQzYWZkMTVjNTk2NzEzMjRhMGXaG6FM: --dhchap-ctrl-secret DHHC-1:02:ZDY4ZDBkOWNjNmJmNDlkYjE2YzBlZDQ3ZTI1MjEwNjg0NWEyMTFiNDA2MjRlZTkxbMBjPA==:
00:17:45.580 12:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:45.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:45.580 12:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:17:45.580 12:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:17:45.580 12:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:45.580 12:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:17:45.580 12:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:45.580 12:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:17:45.580 12:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:17:45.837 12:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2
00:17:45.837 12:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:45.837 12:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:45.837 12:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:17:45.837 12:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:17:45.837 12:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:45.837 12:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:45.837 12:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:17:45.837 12:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:45.837 12:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:17:45.837 12:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:45.837 12:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:46.094
00:17:46.094 12:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:46.094 12:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:46.094 12:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:46.094 12:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:46.094 12:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:46.094 12:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:17:46.094 12:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:46.094 12:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:17:46.094 12:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:46.094 {
00:17:46.094 "cntlid": 5,
00:17:46.094 "qid": 0,
00:17:46.094 "state": "enabled",
00:17:46.094 "listen_address": {
00:17:46.094 "trtype": "TCP",
00:17:46.094 "adrfam": "IPv4",
00:17:46.094 "traddr": "10.0.0.2",
00:17:46.094 "trsvcid": "4420"
00:17:46.094 },
00:17:46.094 "peer_address": {
00:17:46.094 "trtype": "TCP",
00:17:46.094 "adrfam": "IPv4",
00:17:46.094 "traddr": "10.0.0.1",
00:17:46.094 "trsvcid": "59184"
00:17:46.094 },
00:17:46.095 "auth": {
00:17:46.095 "state": "completed",
00:17:46.095 "digest": "sha256",
00:17:46.095 "dhgroup": "null"
00:17:46.095 }
00:17:46.095 }
00:17:46.095 ]'
00:17:46.095 12:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:46.351 12:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:46.351 12:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:46.351 12:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:17:46.351 12:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:46.351 12:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:46.351 12:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:46.351 12:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:46.608 12:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YTI3YmJjNGUxMzk0MzE0ZDNkNTk4OGYyMjUyNjUzMGI2MTllMTY2NGUwZWEyMDc1ga2svA==: --dhchap-ctrl-secret DHHC-1:01:NTNmZGViYWZiZGVhYjk3Zjc5YzIyYmIwYjNhOGY2YTFVhIJL:
00:17:47.226 12:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:47.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:47.226 12:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:17:47.226 12:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:17:47.226 12:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:47.226 12:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:17:47.226 12:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:47.226 12:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:17:47.226 12:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:17:47.226 12:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3
00:17:47.226 12:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:47.226 12:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:47.226 12:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:17:47.226 12:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:17:47.226 12:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:47.226 12:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3
00:17:47.226 12:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:17:47.226 12:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:47.226 12:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:17:47.226 12:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:47.226 12:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:47.483
00:17:47.483 12:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:47.483 12:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:47.483 12:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:47.742 12:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:47.742 12:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:47.742 12:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:17:47.742 12:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:47.742 12:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:17:47.742 12:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:47.742 {
00:17:47.742 "cntlid": 7,
00:17:47.742 "qid": 0,
00:17:47.742 "state": "enabled",
00:17:47.742 "listen_address": {
00:17:47.742 "trtype": "TCP",
00:17:47.742 "adrfam": "IPv4",
00:17:47.742 "traddr": "10.0.0.2",
00:17:47.742 "trsvcid": "4420"
00:17:47.742 },
00:17:47.742 "peer_address": {
00:17:47.742 "trtype": "TCP",
00:17:47.742 "adrfam": "IPv4",
00:17:47.742 "traddr": "10.0.0.1",
00:17:47.742 "trsvcid": "59218"
00:17:47.742 },
00:17:47.742 "auth": {
00:17:47.742 "state": "completed",
00:17:47.742 "digest": "sha256",
00:17:47.742 "dhgroup": "null"
00:17:47.742 }
00:17:47.742 }
00:17:47.742 ]'
00:17:47.742 12:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:47.742 12:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:47.742 12:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:47.742 12:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:17:47.742 12:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:47.742 12:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:47.742 12:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:47.742 12:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:48.000 12:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YjBhZDZkZTIxYjFmMmY1NzYyNDIzMTE0MzYxYjFjODAyY2Y5ODkzMzkzNDg4NDI2ZWMwMzAyN2RmZWQ1NDMwYlNELfg=:
00:17:48.564 12:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:48.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:48.564 12:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:17:48.564 12:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:17:48.564 12:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:48.564 12:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:17:48.564 12:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:17:48.564 12:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:48.564 12:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:48.565 12:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:48.823 12:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0
00:17:48.823 12:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:48.823 12:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:48.823 12:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:17:48.823 12:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:17:48.823 12:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:48.823 12:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:48.823 12:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:17:48.823 12:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:48.823 12:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:17:48.823 12:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:48.823 12:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:48.823
00:17:49.079 12:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:49.079 12:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:49.079 12:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:49.079 12:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:49.079 12:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:49.079 12:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:17:49.079 12:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:49.079 12:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:17:49.079 12:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:49.079 {
00:17:49.079 "cntlid": 9,
00:17:49.079 "qid": 0,
00:17:49.079 "state": "enabled",
00:17:49.079 "listen_address": {
00:17:49.079 "trtype": "TCP",
00:17:49.079 "adrfam": "IPv4",
00:17:49.079 "traddr": "10.0.0.2",
00:17:49.079 "trsvcid": "4420"
00:17:49.079 },
00:17:49.079 "peer_address": {
00:17:49.079 "trtype": "TCP",
00:17:49.079 "adrfam": "IPv4",
00:17:49.079 "traddr": "10.0.0.1",
00:17:49.079 "trsvcid": "59242"
00:17:49.079 },
00:17:49.079 "auth": {
00:17:49.079 "state": "completed",
00:17:49.079 "digest": "sha256",
00:17:49.079 "dhgroup": "ffdhe2048"
00:17:49.079 }
00:17:49.079 }
00:17:49.079 ]'
00:17:49.079 12:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:49.079 12:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:49.079 12:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:49.337 12:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:49.337 12:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:49.337 12:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:49.337 12:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:49.337 12:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:49.337 12:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MTZhMmJlMjRiMjgwMDk0OGRkNWEwY2ZhNjg4OTAwOWIxYzg0NjM4Njc4MjdkOTkwDJuT6A==: --dhchap-ctrl-secret DHHC-1:03:YjIxNGYwODk5ZmFlNzliZDc1OGU0Y2IyYmZmN2I4MjViZmMwNDUwZDBkOWNiZDlmZGM0YjhiZmI1MzE3OGYxZnEWiY8=:
00:17:49.903 12:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:49.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:49.903 12:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:17:49.903 12:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:17:49.903 12:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:49.903 12:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:17:49.903 12:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:49.903 12:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:49.903 12:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:50.161 12:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1
00:17:50.161 12:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:50.161 12:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:50.161 12:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:17:50.161 12:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:17:50.161 12:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:50.161 12:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:50.161 12:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:17:50.161 12:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:50.161 12:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:17:50.161 12:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:50.161 12:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:50.425
00:17:50.425 12:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:50.425 12:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:50.425 12:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:50.684 12:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:50.684 12:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:50.684 12:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:17:50.684 12:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:50.684 12:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:17:50.684 12:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:50.684 {
00:17:50.684 "cntlid": 11,
00:17:50.684 "qid": 0,
00:17:50.684 "state": "enabled",
00:17:50.684 "listen_address": {
00:17:50.684 "trtype": "TCP",
00:17:50.684 "adrfam": "IPv4",
00:17:50.684 "traddr": "10.0.0.2",
00:17:50.684 "trsvcid": "4420"
00:17:50.684 },
00:17:50.684 "peer_address": {
00:17:50.684 "trtype": "TCP",
00:17:50.684 "adrfam": "IPv4",
00:17:50.684 "traddr": "10.0.0.1",
00:17:50.684 "trsvcid": "59264"
00:17:50.684 },
00:17:50.684 "auth": {
00:17:50.684 "state": "completed",
00:17:50.684 "digest": "sha256",
00:17:50.684 "dhgroup": "ffdhe2048"
00:17:50.684 }
00:17:50.684 }
00:17:50.684 ]'
00:17:50.684 12:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:50.684 12:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:50.684 12:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:50.684 12:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:50.684 12:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:50.684 12:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:50.684 12:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:50.684 12:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:50.942 12:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MTM0MDY3ZDQ3NjEyNWQzYWZkMTVjNTk2NzEzMjRhMGXaG6FM: --dhchap-ctrl-secret DHHC-1:02:ZDY4ZDBkOWNjNmJmNDlkYjE2YzBlZDQ3ZTI1MjEwNjg0NWEyMTFiNDA2MjRlZTkxbMBjPA==:
00:17:51.508 12:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:51.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:51.508 12:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:17:51.508 12:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:17:51.508 12:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:51.508 12:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:17:51.508 12:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:51.508 12:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:51.508 12:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:51.766 12:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2
00:17:51.766 12:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:51.766 12:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:51.766 12:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:17:51.766 12:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:17:51.766 12:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:51.766 12:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:51.766 12:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:17:51.766 12:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:51.766 12:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:17:51.766 12:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:51.766 12:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:51.766
00:17:51.766 12:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:51.766 12:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:51.766 12:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:52.025 12:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:52.025 12:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:52.025 12:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:17:52.025 12:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:52.025 12:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:17:52.025 12:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:52.025 {
00:17:52.025 "cntlid": 13,
00:17:52.025 "qid": 0,
00:17:52.025 "state": "enabled",
00:17:52.025 "listen_address": {
00:17:52.025 "trtype": "TCP",
00:17:52.025 "adrfam": "IPv4",
00:17:52.025 "traddr": "10.0.0.2",
00:17:52.025 "trsvcid": "4420"
00:17:52.025 },
00:17:52.025 "peer_address": {
00:17:52.025 "trtype": "TCP",
00:17:52.025 "adrfam": "IPv4",
00:17:52.025 "traddr": "10.0.0.1",
00:17:52.025 "trsvcid": "59290"
00:17:52.025 },
00:17:52.025 "auth": {
00:17:52.025 "state": "completed",
00:17:52.025 "digest": "sha256",
00:17:52.025 "dhgroup": "ffdhe2048"
00:17:52.025 }
00:17:52.025 }
00:17:52.025 ]'
00:17:52.025 12:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:52.025 12:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:52.025 12:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:52.025 12:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:52.025 12:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:52.283 12:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:52.283 12:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:52.283 12:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:52.283 12:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YTI3YmJjNGUxMzk0MzE0ZDNkNTk4OGYyMjUyNjUzMGI2MTllMTY2NGUwZWEyMDc1ga2svA==: --dhchap-ctrl-secret DHHC-1:01:NTNmZGViYWZiZGVhYjk3Zjc5YzIyYmIwYjNhOGY2YTFVhIJL:
00:17:52.853 12:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:52.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:52.853 12:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:17:52.853 12:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:17:52.853 12:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:52.853 12:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:17:52.853 12:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:52.853 12:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:52.853 12:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:53.111 12:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3
00:17:53.111 12:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:53.111 12:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:53.111 12:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:17:53.111 12:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:17:53.111 12:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:53.111 12:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3
00:17:53.112 12:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:17:53.112 12:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:53.112 12:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:17:53.112 12:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:53.112 12:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:53.370
00:17:53.370 12:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:53.370 12:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:53.370 12:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock
bdev_nvme_get_controllers 00:17:53.370 12:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.370 12:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.370 12:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.370 12:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.627 12:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.627 12:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:53.627 { 00:17:53.627 "cntlid": 15, 00:17:53.627 "qid": 0, 00:17:53.628 "state": "enabled", 00:17:53.628 "listen_address": { 00:17:53.628 "trtype": "TCP", 00:17:53.628 "adrfam": "IPv4", 00:17:53.628 "traddr": "10.0.0.2", 00:17:53.628 "trsvcid": "4420" 00:17:53.628 }, 00:17:53.628 "peer_address": { 00:17:53.628 "trtype": "TCP", 00:17:53.628 "adrfam": "IPv4", 00:17:53.628 "traddr": "10.0.0.1", 00:17:53.628 "trsvcid": "59312" 00:17:53.628 }, 00:17:53.628 "auth": { 00:17:53.628 "state": "completed", 00:17:53.628 "digest": "sha256", 00:17:53.628 "dhgroup": "ffdhe2048" 00:17:53.628 } 00:17:53.628 } 00:17:53.628 ]' 00:17:53.628 12:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:53.628 12:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:53.628 12:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:53.628 12:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:53.628 12:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:53.628 12:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.628 12:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:17:53.628 12:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.885 12:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YjBhZDZkZTIxYjFmMmY1NzYyNDIzMTE0MzYxYjFjODAyY2Y5ODkzMzkzNDg4NDI2ZWMwMzAyN2RmZWQ1NDMwYlNELfg=: 00:17:54.451 12:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.451 12:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:54.451 12:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:54.451 12:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.451 12:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:54.451 12:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:54.451 12:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:54.451 12:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:54.451 12:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:54.451 12:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:17:54.451 12:06:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.451 12:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:54.451 12:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:54.451 12:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:54.451 12:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.452 12:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.452 12:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:54.452 12:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.452 12:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:54.452 12:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.452 12:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.710 00:17:54.710 12:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.710 12:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.710 12:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.968 12:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.968 12:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.968 12:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:54.968 12:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.968 12:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:54.968 12:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.968 { 00:17:54.968 "cntlid": 17, 00:17:54.968 "qid": 0, 00:17:54.968 "state": "enabled", 00:17:54.968 "listen_address": { 00:17:54.968 "trtype": "TCP", 00:17:54.968 "adrfam": "IPv4", 00:17:54.968 "traddr": "10.0.0.2", 00:17:54.968 "trsvcid": "4420" 00:17:54.968 }, 00:17:54.968 "peer_address": { 00:17:54.968 "trtype": "TCP", 00:17:54.968 "adrfam": "IPv4", 00:17:54.968 "traddr": "10.0.0.1", 00:17:54.968 "trsvcid": "46298" 00:17:54.968 }, 00:17:54.968 "auth": { 00:17:54.968 "state": "completed", 00:17:54.968 "digest": "sha256", 00:17:54.968 "dhgroup": "ffdhe3072" 00:17:54.968 } 00:17:54.968 } 00:17:54.968 ]' 00:17:54.968 12:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.968 12:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:54.968 12:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.968 12:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:54.968 12:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:55.227 12:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.227 12:06:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.227 12:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.227 12:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MTZhMmJlMjRiMjgwMDk0OGRkNWEwY2ZhNjg4OTAwOWIxYzg0NjM4Njc4MjdkOTkwDJuT6A==: --dhchap-ctrl-secret DHHC-1:03:YjIxNGYwODk5ZmFlNzliZDc1OGU0Y2IyYmZmN2I4MjViZmMwNDUwZDBkOWNiZDlmZGM0YjhiZmI1MzE3OGYxZnEWiY8=: 00:17:55.791 12:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.791 12:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:55.791 12:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.791 12:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.791 12:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.791 12:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.791 12:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:55.791 12:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:56.049 12:06:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:17:56.049 12:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.049 12:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:56.049 12:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:56.049 12:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:56.049 12:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.049 12:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.049 12:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:56.049 12:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.049 12:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:56.049 12:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.049 12:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.307 00:17:56.307 12:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.307 12:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:17:56.307 12:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.565 12:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.565 12:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.565 12:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:56.565 12:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.565 12:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:56.565 12:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.565 { 00:17:56.565 "cntlid": 19, 00:17:56.565 "qid": 0, 00:17:56.565 "state": "enabled", 00:17:56.565 "listen_address": { 00:17:56.565 "trtype": "TCP", 00:17:56.565 "adrfam": "IPv4", 00:17:56.565 "traddr": "10.0.0.2", 00:17:56.565 "trsvcid": "4420" 00:17:56.565 }, 00:17:56.565 "peer_address": { 00:17:56.565 "trtype": "TCP", 00:17:56.565 "adrfam": "IPv4", 00:17:56.565 "traddr": "10.0.0.1", 00:17:56.565 "trsvcid": "46334" 00:17:56.565 }, 00:17:56.565 "auth": { 00:17:56.565 "state": "completed", 00:17:56.565 "digest": "sha256", 00:17:56.565 "dhgroup": "ffdhe3072" 00:17:56.565 } 00:17:56.565 } 00:17:56.565 ]' 00:17:56.565 12:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.565 12:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:56.565 12:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.565 12:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:56.565 12:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.565 12:06:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.565 12:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.565 12:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.823 12:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MTM0MDY3ZDQ3NjEyNWQzYWZkMTVjNTk2NzEzMjRhMGXaG6FM: --dhchap-ctrl-secret DHHC-1:02:ZDY4ZDBkOWNjNmJmNDlkYjE2YzBlZDQ3ZTI1MjEwNjg0NWEyMTFiNDA2MjRlZTkxbMBjPA==: 00:17:57.389 12:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.389 12:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:57.389 12:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:57.389 12:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.389 12:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:57.389 12:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.389 12:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:57.389 12:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:17:57.389 12:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:17:57.389 12:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.389 12:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:57.389 12:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:57.389 12:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:57.389 12:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.389 12:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.389 12:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:57.389 12:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.647 12:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:57.647 12:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.647 12:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.647 00:17:57.647 12:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.647 12:06:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.647 12:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:57.905 12:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.905 12:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.905 12:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:57.905 12:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.905 12:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:57.905 12:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.905 { 00:17:57.905 "cntlid": 21, 00:17:57.905 "qid": 0, 00:17:57.905 "state": "enabled", 00:17:57.905 "listen_address": { 00:17:57.905 "trtype": "TCP", 00:17:57.905 "adrfam": "IPv4", 00:17:57.905 "traddr": "10.0.0.2", 00:17:57.905 "trsvcid": "4420" 00:17:57.905 }, 00:17:57.905 "peer_address": { 00:17:57.905 "trtype": "TCP", 00:17:57.905 "adrfam": "IPv4", 00:17:57.905 "traddr": "10.0.0.1", 00:17:57.905 "trsvcid": "46356" 00:17:57.905 }, 00:17:57.905 "auth": { 00:17:57.905 "state": "completed", 00:17:57.905 "digest": "sha256", 00:17:57.905 "dhgroup": "ffdhe3072" 00:17:57.905 } 00:17:57.905 } 00:17:57.905 ]' 00:17:57.905 12:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.905 12:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:57.905 12:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.162 12:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:58.162 12:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.162 12:06:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.162 12:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.162 12:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.162 12:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YTI3YmJjNGUxMzk0MzE0ZDNkNTk4OGYyMjUyNjUzMGI2MTllMTY2NGUwZWEyMDc1ga2svA==: --dhchap-ctrl-secret DHHC-1:01:NTNmZGViYWZiZGVhYjk3Zjc5YzIyYmIwYjNhOGY2YTFVhIJL: 00:17:58.728 12:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.728 12:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:58.728 12:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:58.728 12:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.728 12:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:58.728 12:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.729 12:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:58.729 12:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe3072 00:17:58.987 12:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:17:58.987 12:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.987 12:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:58.987 12:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:58.987 12:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:58.987 12:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.987 12:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:58.987 12:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:58.987 12:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.987 12:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:58.987 12:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:58.987 12:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:59.245 00:17:59.245 12:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.245 12:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
00:17:59.245 12:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.503 12:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.503 12:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.503 12:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:59.503 12:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.503 12:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:59.503 12:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.503 { 00:17:59.503 "cntlid": 23, 00:17:59.503 "qid": 0, 00:17:59.503 "state": "enabled", 00:17:59.503 "listen_address": { 00:17:59.503 "trtype": "TCP", 00:17:59.503 "adrfam": "IPv4", 00:17:59.503 "traddr": "10.0.0.2", 00:17:59.503 "trsvcid": "4420" 00:17:59.503 }, 00:17:59.503 "peer_address": { 00:17:59.503 "trtype": "TCP", 00:17:59.503 "adrfam": "IPv4", 00:17:59.503 "traddr": "10.0.0.1", 00:17:59.503 "trsvcid": "46392" 00:17:59.503 }, 00:17:59.503 "auth": { 00:17:59.503 "state": "completed", 00:17:59.503 "digest": "sha256", 00:17:59.503 "dhgroup": "ffdhe3072" 00:17:59.503 } 00:17:59.503 } 00:17:59.503 ]' 00:17:59.503 12:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.503 12:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:59.503 12:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.503 12:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:59.503 12:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.503 12:06:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.503 12:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.503 12:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.761 12:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YjBhZDZkZTIxYjFmMmY1NzYyNDIzMTE0MzYxYjFjODAyY2Y5ODkzMzkzNDg4NDI2ZWMwMzAyN2RmZWQ1NDMwYlNELfg=: 00:18:00.326 12:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.326 12:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:00.326 12:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:00.326 12:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.326 12:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:00.326 12:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:00.326 12:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.326 12:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:00.326 12:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:00.584 12:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:18:00.584 12:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.584 12:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:00.584 12:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:00.584 12:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:00.584 12:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.584 12:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.584 12:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:00.584 12:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.584 12:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:00.584 12:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.584 12:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.842 00:18:00.842 12:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:18:00.842 12:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.842 12:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.842 12:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.842 12:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.099 12:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:01.099 12:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.099 12:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:01.099 12:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.099 { 00:18:01.099 "cntlid": 25, 00:18:01.099 "qid": 0, 00:18:01.099 "state": "enabled", 00:18:01.099 "listen_address": { 00:18:01.099 "trtype": "TCP", 00:18:01.099 "adrfam": "IPv4", 00:18:01.099 "traddr": "10.0.0.2", 00:18:01.099 "trsvcid": "4420" 00:18:01.099 }, 00:18:01.099 "peer_address": { 00:18:01.099 "trtype": "TCP", 00:18:01.099 "adrfam": "IPv4", 00:18:01.099 "traddr": "10.0.0.1", 00:18:01.099 "trsvcid": "46412" 00:18:01.099 }, 00:18:01.099 "auth": { 00:18:01.100 "state": "completed", 00:18:01.100 "digest": "sha256", 00:18:01.100 "dhgroup": "ffdhe4096" 00:18:01.100 } 00:18:01.100 } 00:18:01.100 ]' 00:18:01.100 12:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.100 12:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:01.100 12:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.100 12:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:01.100 12:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
jq -r '.[0].auth.state' 00:18:01.100 12:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.100 12:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.100 12:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.358 12:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MTZhMmJlMjRiMjgwMDk0OGRkNWEwY2ZhNjg4OTAwOWIxYzg0NjM4Njc4MjdkOTkwDJuT6A==: --dhchap-ctrl-secret DHHC-1:03:YjIxNGYwODk5ZmFlNzliZDc1OGU0Y2IyYmZmN2I4MjViZmMwNDUwZDBkOWNiZDlmZGM0YjhiZmI1MzE3OGYxZnEWiY8=: 00:18:01.923 12:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.923 12:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:01.923 12:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:01.923 12:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.923 12:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:01.923 12:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.923 12:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:01.923 12:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:01.923 12:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:01.923 12:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.923 12:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:01.923 12:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:01.923 12:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:01.923 12:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.923 12:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.923 12:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:01.923 12:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.923 12:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:01.923 12:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.923 12:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.181 
00:18:02.439 12:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.439 12:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.439 12:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.439 12:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.439 12:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.439 12:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:02.439 12:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.439 12:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:02.439 12:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.439 { 00:18:02.439 "cntlid": 27, 00:18:02.439 "qid": 0, 00:18:02.439 "state": "enabled", 00:18:02.439 "listen_address": { 00:18:02.439 "trtype": "TCP", 00:18:02.439 "adrfam": "IPv4", 00:18:02.439 "traddr": "10.0.0.2", 00:18:02.439 "trsvcid": "4420" 00:18:02.439 }, 00:18:02.439 "peer_address": { 00:18:02.439 "trtype": "TCP", 00:18:02.439 "adrfam": "IPv4", 00:18:02.439 "traddr": "10.0.0.1", 00:18:02.439 "trsvcid": "46454" 00:18:02.439 }, 00:18:02.439 "auth": { 00:18:02.439 "state": "completed", 00:18:02.439 "digest": "sha256", 00:18:02.439 "dhgroup": "ffdhe4096" 00:18:02.439 } 00:18:02.439 } 00:18:02.439 ]' 00:18:02.439 12:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.439 12:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:02.439 12:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.697 12:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:02.697 12:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.697 12:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.697 12:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.697 12:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.697 12:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MTM0MDY3ZDQ3NjEyNWQzYWZkMTVjNTk2NzEzMjRhMGXaG6FM: --dhchap-ctrl-secret DHHC-1:02:ZDY4ZDBkOWNjNmJmNDlkYjE2YzBlZDQ3ZTI1MjEwNjg0NWEyMTFiNDA2MjRlZTkxbMBjPA==: 00:18:03.262 12:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.262 12:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:03.262 12:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.262 12:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.262 12:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.262 12:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.262 12:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:03.262 12:06:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:03.519 12:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:18:03.519 12:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.519 12:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:03.519 12:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:03.519 12:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:03.519 12:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.519 12:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.519 12:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.519 12:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.519 12:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.519 12:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.519 12:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:18:03.777 00:18:03.777 12:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.777 12:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.777 12:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.035 12:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.035 12:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.035 12:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:04.035 12:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.035 12:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:04.035 12:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.035 { 00:18:04.035 "cntlid": 29, 00:18:04.035 "qid": 0, 00:18:04.035 "state": "enabled", 00:18:04.035 "listen_address": { 00:18:04.035 "trtype": "TCP", 00:18:04.035 "adrfam": "IPv4", 00:18:04.035 "traddr": "10.0.0.2", 00:18:04.035 "trsvcid": "4420" 00:18:04.035 }, 00:18:04.035 "peer_address": { 00:18:04.035 "trtype": "TCP", 00:18:04.035 "adrfam": "IPv4", 00:18:04.035 "traddr": "10.0.0.1", 00:18:04.035 "trsvcid": "46480" 00:18:04.035 }, 00:18:04.035 "auth": { 00:18:04.035 "state": "completed", 00:18:04.035 "digest": "sha256", 00:18:04.035 "dhgroup": "ffdhe4096" 00:18:04.035 } 00:18:04.035 } 00:18:04.035 ]' 00:18:04.035 12:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.035 12:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:04.035 12:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.035 12:06:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:04.035 12:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.035 12:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.035 12:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.035 12:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.292 12:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YTI3YmJjNGUxMzk0MzE0ZDNkNTk4OGYyMjUyNjUzMGI2MTllMTY2NGUwZWEyMDc1ga2svA==: --dhchap-ctrl-secret DHHC-1:01:NTNmZGViYWZiZGVhYjk3Zjc5YzIyYmIwYjNhOGY2YTFVhIJL: 00:18:04.858 12:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.858 12:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:04.858 12:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:04.858 12:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.858 12:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:04.858 12:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.858 12:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 
00:18:04.858 12:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:05.116 12:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:05.116 12:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.116 12:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:05.116 12:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:05.116 12:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:05.116 12:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.116 12:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:05.116 12:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:05.116 12:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.116 12:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:05.116 12:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.116 12:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.374 
00:18:05.374 12:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.374 12:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.374 12:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.374 12:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.374 12:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.374 12:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:05.374 12:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.374 12:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:05.632 12:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.632 { 00:18:05.632 "cntlid": 31, 00:18:05.632 "qid": 0, 00:18:05.632 "state": "enabled", 00:18:05.632 "listen_address": { 00:18:05.632 "trtype": "TCP", 00:18:05.632 "adrfam": "IPv4", 00:18:05.632 "traddr": "10.0.0.2", 00:18:05.632 "trsvcid": "4420" 00:18:05.632 }, 00:18:05.632 "peer_address": { 00:18:05.632 "trtype": "TCP", 00:18:05.632 "adrfam": "IPv4", 00:18:05.632 "traddr": "10.0.0.1", 00:18:05.632 "trsvcid": "60426" 00:18:05.632 }, 00:18:05.632 "auth": { 00:18:05.632 "state": "completed", 00:18:05.632 "digest": "sha256", 00:18:05.632 "dhgroup": "ffdhe4096" 00:18:05.632 } 00:18:05.632 } 00:18:05.632 ]' 00:18:05.632 12:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.632 12:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:05.632 12:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.632 12:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:05.632 12:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.632 12:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.632 12:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.632 12:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.890 12:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YjBhZDZkZTIxYjFmMmY1NzYyNDIzMTE0MzYxYjFjODAyY2Y5ODkzMzkzNDg4NDI2ZWMwMzAyN2RmZWQ1NDMwYlNELfg=: 00:18:06.455 12:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.455 12:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:06.455 12:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:06.455 12:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.455 12:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:06.455 12:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:06.455 12:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.455 12:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:18:06.455 12:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:06.455 12:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:06.455 12:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.455 12:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:06.455 12:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:06.455 12:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:06.455 12:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.455 12:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.455 12:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:06.455 12:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.455 12:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:06.455 12:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.455 12:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.713 00:18:06.972 12:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.972 12:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.972 12:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.972 12:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.972 12:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.972 12:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:06.972 12:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.972 12:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:06.972 12:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.972 { 00:18:06.972 "cntlid": 33, 00:18:06.972 "qid": 0, 00:18:06.972 "state": "enabled", 00:18:06.972 "listen_address": { 00:18:06.972 "trtype": "TCP", 00:18:06.972 "adrfam": "IPv4", 00:18:06.972 "traddr": "10.0.0.2", 00:18:06.972 "trsvcid": "4420" 00:18:06.972 }, 00:18:06.972 "peer_address": { 00:18:06.972 "trtype": "TCP", 00:18:06.972 "adrfam": "IPv4", 00:18:06.972 "traddr": "10.0.0.1", 00:18:06.972 "trsvcid": "60452" 00:18:06.972 }, 00:18:06.972 "auth": { 00:18:06.972 "state": "completed", 00:18:06.972 "digest": "sha256", 00:18:06.972 "dhgroup": "ffdhe6144" 00:18:06.972 } 00:18:06.972 } 00:18:06.972 ]' 00:18:06.972 12:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.972 12:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:06.972 12:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:18:07.230 12:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:07.230 12:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.230 12:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.230 12:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.231 12:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.231 12:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MTZhMmJlMjRiMjgwMDk0OGRkNWEwY2ZhNjg4OTAwOWIxYzg0NjM4Njc4MjdkOTkwDJuT6A==: --dhchap-ctrl-secret DHHC-1:03:YjIxNGYwODk5ZmFlNzliZDc1OGU0Y2IyYmZmN2I4MjViZmMwNDUwZDBkOWNiZDlmZGM0YjhiZmI1MzE3OGYxZnEWiY8=: 00:18:07.798 12:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.798 12:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:07.798 12:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:07.798 12:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.134 12:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:08.134 12:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.134 12:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- 
# hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:08.134 12:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:08.134 12:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:08.134 12:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.134 12:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:08.134 12:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:08.134 12:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:08.134 12:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.134 12:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.134 12:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:08.134 12:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.134 12:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:08.134 12:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.134 12:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.409 00:18:08.409 12:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.409 12:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.409 12:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.667 12:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.667 12:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.667 12:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:08.667 12:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.667 12:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:08.667 12:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.667 { 00:18:08.667 "cntlid": 35, 00:18:08.667 "qid": 0, 00:18:08.667 "state": "enabled", 00:18:08.667 "listen_address": { 00:18:08.667 "trtype": "TCP", 00:18:08.667 "adrfam": "IPv4", 00:18:08.667 "traddr": "10.0.0.2", 00:18:08.667 "trsvcid": "4420" 00:18:08.667 }, 00:18:08.667 "peer_address": { 00:18:08.667 "trtype": "TCP", 00:18:08.667 "adrfam": "IPv4", 00:18:08.667 "traddr": "10.0.0.1", 00:18:08.667 "trsvcid": "60480" 00:18:08.667 }, 00:18:08.667 "auth": { 00:18:08.667 "state": "completed", 00:18:08.667 "digest": "sha256", 00:18:08.667 "dhgroup": "ffdhe6144" 00:18:08.667 } 00:18:08.667 } 00:18:08.667 ]' 00:18:08.667 12:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.667 12:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:08.667 
12:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.667 12:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:08.667 12:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.667 12:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.667 12:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.667 12:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.925 12:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MTM0MDY3ZDQ3NjEyNWQzYWZkMTVjNTk2NzEzMjRhMGXaG6FM: --dhchap-ctrl-secret DHHC-1:02:ZDY4ZDBkOWNjNmJmNDlkYjE2YzBlZDQ3ZTI1MjEwNjg0NWEyMTFiNDA2MjRlZTkxbMBjPA==: 00:18:09.492 12:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.492 12:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:09.492 12:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:09.492 12:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.492 12:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:09.492 12:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.492 12:06:58 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:09.492 12:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:09.751 12:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:09.751 12:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.751 12:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:09.751 12:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:09.751 12:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:09.751 12:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.751 12:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.751 12:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:09.751 12:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.751 12:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:09.751 12:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.751 12:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.009 00:18:10.009 12:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.009 12:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.009 12:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.267 12:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.267 12:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.267 12:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:10.267 12:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.267 12:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:10.267 12:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.267 { 00:18:10.267 "cntlid": 37, 00:18:10.267 "qid": 0, 00:18:10.267 "state": "enabled", 00:18:10.267 "listen_address": { 00:18:10.267 "trtype": "TCP", 00:18:10.267 "adrfam": "IPv4", 00:18:10.267 "traddr": "10.0.0.2", 00:18:10.267 "trsvcid": "4420" 00:18:10.267 }, 00:18:10.267 "peer_address": { 00:18:10.267 "trtype": "TCP", 00:18:10.267 "adrfam": "IPv4", 00:18:10.267 "traddr": "10.0.0.1", 00:18:10.267 "trsvcid": "60504" 00:18:10.267 }, 00:18:10.267 "auth": { 00:18:10.267 "state": "completed", 00:18:10.267 "digest": "sha256", 00:18:10.267 "dhgroup": "ffdhe6144" 00:18:10.267 } 00:18:10.267 } 00:18:10.267 ]' 00:18:10.267 12:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.267 12:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:18:10.267 12:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.267 12:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:10.267 12:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.267 12:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.267 12:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.267 12:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.525 12:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YTI3YmJjNGUxMzk0MzE0ZDNkNTk4OGYyMjUyNjUzMGI2MTllMTY2NGUwZWEyMDc1ga2svA==: --dhchap-ctrl-secret DHHC-1:01:NTNmZGViYWZiZGVhYjk3Zjc5YzIyYmIwYjNhOGY2YTFVhIJL: 00:18:11.089 12:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.089 12:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:11.089 12:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:11.089 12:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.089 12:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:11.089 12:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.089 
12:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:11.089 12:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:11.347 12:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:18:11.347 12:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.347 12:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:11.347 12:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:11.347 12:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:11.348 12:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.348 12:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:11.348 12:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:11.348 12:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.348 12:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:11.348 12:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:11.348 12:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:11.605 00:18:11.605 12:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.605 12:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.605 12:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.863 12:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.863 12:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.863 12:07:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:11.863 12:07:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.863 12:07:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:11.863 12:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.863 { 00:18:11.863 "cntlid": 39, 00:18:11.863 "qid": 0, 00:18:11.863 "state": "enabled", 00:18:11.863 "listen_address": { 00:18:11.863 "trtype": "TCP", 00:18:11.863 "adrfam": "IPv4", 00:18:11.863 "traddr": "10.0.0.2", 00:18:11.863 "trsvcid": "4420" 00:18:11.863 }, 00:18:11.863 "peer_address": { 00:18:11.863 "trtype": "TCP", 00:18:11.863 "adrfam": "IPv4", 00:18:11.863 "traddr": "10.0.0.1", 00:18:11.863 "trsvcid": "60532" 00:18:11.863 }, 00:18:11.863 "auth": { 00:18:11.863 "state": "completed", 00:18:11.863 "digest": "sha256", 00:18:11.863 "dhgroup": "ffdhe6144" 00:18:11.863 } 00:18:11.863 } 00:18:11.863 ]' 00:18:11.863 12:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.863 12:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:11.863 12:07:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.863 12:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:11.863 12:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.863 12:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.863 12:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.863 12:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.121 12:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YjBhZDZkZTIxYjFmMmY1NzYyNDIzMTE0MzYxYjFjODAyY2Y5ODkzMzkzNDg4NDI2ZWMwMzAyN2RmZWQ1NDMwYlNELfg=: 00:18:12.686 12:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.686 12:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:12.686 12:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:12.686 12:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.686 12:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:12.686 12:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:12.686 12:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.686 
12:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:12.686 12:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:12.943 12:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:12.943 12:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.943 12:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:12.943 12:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:12.943 12:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:12.943 12:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.943 12:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.943 12:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:12.943 12:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.943 12:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:12.943 12:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.943 12:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.200 00:18:13.200 12:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.200 12:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.200 12:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.458 12:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.458 12:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.458 12:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:13.458 12:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.458 12:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:13.458 12:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.458 { 00:18:13.458 "cntlid": 41, 00:18:13.458 "qid": 0, 00:18:13.458 "state": "enabled", 00:18:13.458 "listen_address": { 00:18:13.458 "trtype": "TCP", 00:18:13.458 "adrfam": "IPv4", 00:18:13.458 "traddr": "10.0.0.2", 00:18:13.458 "trsvcid": "4420" 00:18:13.458 }, 00:18:13.458 "peer_address": { 00:18:13.458 "trtype": "TCP", 00:18:13.458 "adrfam": "IPv4", 00:18:13.458 "traddr": "10.0.0.1", 00:18:13.458 "trsvcid": "60562" 00:18:13.458 }, 00:18:13.458 "auth": { 00:18:13.458 "state": "completed", 00:18:13.458 "digest": "sha256", 00:18:13.458 "dhgroup": "ffdhe8192" 00:18:13.458 } 00:18:13.458 } 00:18:13.458 ]' 00:18:13.458 12:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.458 12:07:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:13.458 12:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.715 12:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:13.716 12:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.716 12:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.716 12:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.716 12:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.716 12:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MTZhMmJlMjRiMjgwMDk0OGRkNWEwY2ZhNjg4OTAwOWIxYzg0NjM4Njc4MjdkOTkwDJuT6A==: --dhchap-ctrl-secret DHHC-1:03:YjIxNGYwODk5ZmFlNzliZDc1OGU0Y2IyYmZmN2I4MjViZmMwNDUwZDBkOWNiZDlmZGM0YjhiZmI1MzE3OGYxZnEWiY8=: 00:18:14.280 12:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.281 12:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:14.281 12:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:14.281 12:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.281 12:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:14.281 12:07:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.281 12:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:14.281 12:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:14.539 12:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:14.539 12:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.539 12:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:14.539 12:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:14.539 12:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:14.539 12:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.539 12:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.539 12:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:14.539 12:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.539 12:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:14.539 12:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.539 12:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.105 00:18:15.105 12:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.105 12:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.105 12:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.105 12:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.105 12:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.105 12:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:15.105 12:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.105 12:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:15.105 12:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.105 { 00:18:15.105 "cntlid": 43, 00:18:15.105 "qid": 0, 00:18:15.105 "state": "enabled", 00:18:15.105 "listen_address": { 00:18:15.105 "trtype": "TCP", 00:18:15.105 "adrfam": "IPv4", 00:18:15.105 "traddr": "10.0.0.2", 00:18:15.105 "trsvcid": "4420" 00:18:15.105 }, 00:18:15.105 "peer_address": { 00:18:15.105 "trtype": "TCP", 00:18:15.105 "adrfam": "IPv4", 00:18:15.105 "traddr": "10.0.0.1", 00:18:15.105 "trsvcid": "51584" 00:18:15.105 }, 00:18:15.105 "auth": { 00:18:15.105 "state": "completed", 00:18:15.105 "digest": "sha256", 00:18:15.105 "dhgroup": "ffdhe8192" 00:18:15.105 } 00:18:15.105 } 00:18:15.105 ]' 00:18:15.105 12:07:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.363 12:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:15.363 12:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.363 12:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:15.363 12:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.363 12:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.363 12:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.363 12:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.620 12:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MTM0MDY3ZDQ3NjEyNWQzYWZkMTVjNTk2NzEzMjRhMGXaG6FM: --dhchap-ctrl-secret DHHC-1:02:ZDY4ZDBkOWNjNmJmNDlkYjE2YzBlZDQ3ZTI1MjEwNjg0NWEyMTFiNDA2MjRlZTkxbMBjPA==: 00:18:16.185 12:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.185 12:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:16.185 12:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:16.185 12:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.185 12:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- 
# [[ 0 == 0 ]] 00:18:16.185 12:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.185 12:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:16.185 12:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:16.185 12:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:16.185 12:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.185 12:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:16.185 12:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:16.185 12:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:16.185 12:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.185 12:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.185 12:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:16.185 12:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.185 12:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:16.185 12:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.186 12:07:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.751 00:18:16.751 12:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.751 12:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.751 12:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.010 12:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.010 12:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.010 12:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:17.010 12:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.010 12:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:17.010 12:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.010 { 00:18:17.010 "cntlid": 45, 00:18:17.010 "qid": 0, 00:18:17.010 "state": "enabled", 00:18:17.010 "listen_address": { 00:18:17.010 "trtype": "TCP", 00:18:17.010 "adrfam": "IPv4", 00:18:17.010 "traddr": "10.0.0.2", 00:18:17.010 "trsvcid": "4420" 00:18:17.010 }, 00:18:17.010 "peer_address": { 00:18:17.010 "trtype": "TCP", 00:18:17.010 "adrfam": "IPv4", 00:18:17.010 "traddr": "10.0.0.1", 00:18:17.010 "trsvcid": "51602" 00:18:17.010 }, 00:18:17.010 "auth": { 00:18:17.010 "state": "completed", 00:18:17.010 "digest": "sha256", 00:18:17.010 "dhgroup": "ffdhe8192" 00:18:17.010 } 00:18:17.010 } 00:18:17.010 ]' 
00:18:17.010 12:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.010 12:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:17.010 12:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.010 12:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:17.010 12:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.010 12:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.010 12:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.010 12:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.267 12:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YTI3YmJjNGUxMzk0MzE0ZDNkNTk4OGYyMjUyNjUzMGI2MTllMTY2NGUwZWEyMDc1ga2svA==: --dhchap-ctrl-secret DHHC-1:01:NTNmZGViYWZiZGVhYjk3Zjc5YzIyYmIwYjNhOGY2YTFVhIJL: 00:18:17.829 12:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.829 12:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:17.829 12:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:17.829 12:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.829 12:07:07 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:17.829 12:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.829 12:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:17.829 12:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:17.829 12:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:17.829 12:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.829 12:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:17.829 12:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:17.829 12:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:17.829 12:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.829 12:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:17.829 12:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:17.829 12:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.829 12:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:17.829 12:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:17.829 12:07:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:18.393 00:18:18.393 12:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.393 12:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.393 12:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.651 12:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.651 12:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.651 12:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:18.651 12:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.651 12:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:18.651 12:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.651 { 00:18:18.651 "cntlid": 47, 00:18:18.651 "qid": 0, 00:18:18.651 "state": "enabled", 00:18:18.651 "listen_address": { 00:18:18.651 "trtype": "TCP", 00:18:18.651 "adrfam": "IPv4", 00:18:18.651 "traddr": "10.0.0.2", 00:18:18.651 "trsvcid": "4420" 00:18:18.651 }, 00:18:18.651 "peer_address": { 00:18:18.651 "trtype": "TCP", 00:18:18.651 "adrfam": "IPv4", 00:18:18.651 "traddr": "10.0.0.1", 00:18:18.651 "trsvcid": "51624" 00:18:18.651 }, 00:18:18.651 "auth": { 00:18:18.651 "state": "completed", 00:18:18.651 "digest": "sha256", 00:18:18.651 "dhgroup": "ffdhe8192" 00:18:18.651 } 00:18:18.651 } 00:18:18.651 ]' 00:18:18.651 12:07:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.651 12:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:18.651 12:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.651 12:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:18.651 12:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.651 12:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.651 12:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.651 12:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.909 12:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YjBhZDZkZTIxYjFmMmY1NzYyNDIzMTE0MzYxYjFjODAyY2Y5ODkzMzkzNDg4NDI2ZWMwMzAyN2RmZWQ1NDMwYlNELfg=: 00:18:19.475 12:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.475 12:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:19.475 12:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:19.475 12:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.475 12:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:19.475 
12:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:19.475 12:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:19.475 12:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.475 12:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:19.475 12:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:19.732 12:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:19.732 12:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.732 12:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:19.732 12:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:19.732 12:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:19.732 12:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.732 12:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.733 12:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:19.733 12:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.733 12:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:19.733 12:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.733 12:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.991 00:18:19.991 12:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.991 12:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.991 12:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.991 12:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.991 12:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.991 12:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:19.991 12:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.991 12:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:19.991 12:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.991 { 00:18:19.991 "cntlid": 49, 00:18:19.991 "qid": 0, 00:18:19.991 "state": "enabled", 00:18:19.991 "listen_address": { 00:18:19.991 "trtype": "TCP", 00:18:19.991 "adrfam": "IPv4", 00:18:19.991 "traddr": "10.0.0.2", 00:18:19.991 "trsvcid": "4420" 00:18:19.991 }, 00:18:19.991 "peer_address": { 00:18:19.991 "trtype": "TCP", 00:18:19.991 "adrfam": "IPv4", 00:18:19.991 "traddr": "10.0.0.1", 00:18:19.991 "trsvcid": "51660" 00:18:19.991 }, 00:18:19.991 "auth": 
{ 00:18:19.991 "state": "completed", 00:18:19.991 "digest": "sha384", 00:18:19.991 "dhgroup": "null" 00:18:19.991 } 00:18:19.991 } 00:18:19.991 ]' 00:18:19.991 12:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.249 12:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:20.249 12:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.249 12:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:20.249 12:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.249 12:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.249 12:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.249 12:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.507 12:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MTZhMmJlMjRiMjgwMDk0OGRkNWEwY2ZhNjg4OTAwOWIxYzg0NjM4Njc4MjdkOTkwDJuT6A==: --dhchap-ctrl-secret DHHC-1:03:YjIxNGYwODk5ZmFlNzliZDc1OGU0Y2IyYmZmN2I4MjViZmMwNDUwZDBkOWNiZDlmZGM0YjhiZmI1MzE3OGYxZnEWiY8=: 00:18:21.072 12:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.072 12:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:21.072 12:07:10 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:18:21.072 12:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.072 12:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:21.072 12:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.072 12:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:21.072 12:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:21.072 12:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:21.072 12:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.072 12:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:21.072 12:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:21.072 12:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:21.072 12:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.072 12:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.072 12:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:21.072 12:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.072 12:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:21.072 12:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.072 12:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.330 00:18:21.330 12:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.330 12:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.330 12:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.587 12:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.587 12:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.587 12:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:21.587 12:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.587 12:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:21.587 12:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.587 { 00:18:21.587 "cntlid": 51, 00:18:21.587 "qid": 0, 00:18:21.587 "state": "enabled", 00:18:21.587 "listen_address": { 00:18:21.587 "trtype": "TCP", 00:18:21.587 "adrfam": "IPv4", 00:18:21.587 "traddr": "10.0.0.2", 00:18:21.587 "trsvcid": "4420" 00:18:21.587 }, 00:18:21.587 "peer_address": { 00:18:21.587 "trtype": "TCP", 00:18:21.587 "adrfam": "IPv4", 00:18:21.587 "traddr": "10.0.0.1", 00:18:21.587 "trsvcid": "51684" 00:18:21.587 }, 
00:18:21.587 "auth": { 00:18:21.587 "state": "completed", 00:18:21.587 "digest": "sha384", 00:18:21.587 "dhgroup": "null" 00:18:21.587 } 00:18:21.587 } 00:18:21.587 ]' 00:18:21.587 12:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.587 12:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:21.588 12:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.588 12:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:21.588 12:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.588 12:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.588 12:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.588 12:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.844 12:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MTM0MDY3ZDQ3NjEyNWQzYWZkMTVjNTk2NzEzMjRhMGXaG6FM: --dhchap-ctrl-secret DHHC-1:02:ZDY4ZDBkOWNjNmJmNDlkYjE2YzBlZDQ3ZTI1MjEwNjg0NWEyMTFiNDA2MjRlZTkxbMBjPA==: 00:18:22.407 12:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.407 12:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:22.407 12:07:11 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:18:22.407 12:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.407 12:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:22.407 12:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:22.407 12:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:22.407 12:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:22.665 12:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:22.665 12:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:22.665 12:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:22.665 12:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:22.665 12:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:22.665 12:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.665 12:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.665 12:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:22.665 12:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.665 12:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:22.665 12:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.665 12:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.921 00:18:22.921 12:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.921 12:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.921 12:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.921 12:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.921 12:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.921 12:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:22.921 12:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.921 12:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:22.921 12:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.921 { 00:18:22.921 "cntlid": 53, 00:18:22.921 "qid": 0, 00:18:22.921 "state": "enabled", 00:18:22.921 "listen_address": { 00:18:22.921 "trtype": "TCP", 00:18:22.921 "adrfam": "IPv4", 00:18:22.921 "traddr": "10.0.0.2", 00:18:22.921 "trsvcid": "4420" 00:18:22.921 }, 00:18:22.921 "peer_address": { 00:18:22.921 "trtype": "TCP", 00:18:22.921 "adrfam": "IPv4", 00:18:22.921 "traddr": "10.0.0.1", 00:18:22.921 "trsvcid": "51720" 00:18:22.921 }, 
00:18:22.921 "auth": { 00:18:22.921 "state": "completed", 00:18:22.921 "digest": "sha384", 00:18:22.921 "dhgroup": "null" 00:18:22.921 } 00:18:22.921 } 00:18:22.921 ]' 00:18:22.921 12:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.177 12:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:23.177 12:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.177 12:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:23.178 12:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:23.178 12:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.178 12:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.178 12:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.435 12:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YTI3YmJjNGUxMzk0MzE0ZDNkNTk4OGYyMjUyNjUzMGI2MTllMTY2NGUwZWEyMDc1ga2svA==: --dhchap-ctrl-secret DHHC-1:01:NTNmZGViYWZiZGVhYjk3Zjc5YzIyYmIwYjNhOGY2YTFVhIJL: 00:18:24.000 12:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.000 12:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:24.000 12:07:13 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:18:24.000 12:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.000 12:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:24.000 12:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.000 12:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:24.000 12:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:24.000 12:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:24.000 12:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.000 12:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:24.000 12:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:24.000 12:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:24.000 12:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.000 12:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:24.000 12:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:24.000 12:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.000 12:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:24.000 12:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:24.000 12:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:24.257 00:18:24.257 12:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.257 12:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.257 12:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.515 12:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.515 12:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.515 12:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:24.515 12:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.515 12:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:24.515 12:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.515 { 00:18:24.515 "cntlid": 55, 00:18:24.515 "qid": 0, 00:18:24.515 "state": "enabled", 00:18:24.515 "listen_address": { 00:18:24.515 "trtype": "TCP", 00:18:24.515 "adrfam": "IPv4", 00:18:24.515 "traddr": "10.0.0.2", 00:18:24.515 "trsvcid": "4420" 00:18:24.515 }, 00:18:24.515 "peer_address": { 00:18:24.515 "trtype": "TCP", 00:18:24.515 "adrfam": "IPv4", 00:18:24.515 "traddr": "10.0.0.1", 00:18:24.515 "trsvcid": "59034" 00:18:24.515 }, 00:18:24.515 "auth": { 00:18:24.515 "state": "completed", 00:18:24.515 
"digest": "sha384", 00:18:24.515 "dhgroup": "null" 00:18:24.515 } 00:18:24.515 } 00:18:24.515 ]' 00:18:24.515 12:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.515 12:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:24.515 12:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.515 12:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:24.515 12:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.515 12:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.515 12:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.772 12:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.772 12:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YjBhZDZkZTIxYjFmMmY1NzYyNDIzMTE0MzYxYjFjODAyY2Y5ODkzMzkzNDg4NDI2ZWMwMzAyN2RmZWQ1NDMwYlNELfg=: 00:18:25.338 12:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.338 12:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:25.338 12:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:25.338 12:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.338 
12:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:25.338 12:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:25.338 12:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.338 12:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:25.338 12:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:25.596 12:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:25.596 12:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.596 12:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:25.596 12:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:25.596 12:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:25.596 12:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.596 12:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.596 12:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:25.596 12:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.596 12:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:25.596 12:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.596 12:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.852 00:18:25.852 12:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.852 12:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.852 12:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.852 12:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.110 12:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.110 12:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:26.110 12:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.110 12:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:26.110 12:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.110 { 00:18:26.110 "cntlid": 57, 00:18:26.110 "qid": 0, 00:18:26.110 "state": "enabled", 00:18:26.110 "listen_address": { 00:18:26.110 "trtype": "TCP", 00:18:26.110 "adrfam": "IPv4", 00:18:26.110 "traddr": "10.0.0.2", 00:18:26.110 "trsvcid": "4420" 00:18:26.110 }, 00:18:26.110 "peer_address": { 00:18:26.110 "trtype": "TCP", 00:18:26.110 "adrfam": "IPv4", 00:18:26.110 "traddr": "10.0.0.1", 00:18:26.110 "trsvcid": "59062" 00:18:26.110 }, 00:18:26.110 "auth": 
{ 00:18:26.110 "state": "completed", 00:18:26.110 "digest": "sha384", 00:18:26.110 "dhgroup": "ffdhe2048" 00:18:26.110 } 00:18:26.110 } 00:18:26.110 ]' 00:18:26.110 12:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.110 12:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:26.110 12:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.110 12:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:26.110 12:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.110 12:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.110 12:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.110 12:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.368 12:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MTZhMmJlMjRiMjgwMDk0OGRkNWEwY2ZhNjg4OTAwOWIxYzg0NjM4Njc4MjdkOTkwDJuT6A==: --dhchap-ctrl-secret DHHC-1:03:YjIxNGYwODk5ZmFlNzliZDc1OGU0Y2IyYmZmN2I4MjViZmMwNDUwZDBkOWNiZDlmZGM0YjhiZmI1MzE3OGYxZnEWiY8=: 00:18:26.933 12:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.933 12:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:26.933 12:07:16 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:26.933 12:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.933 12:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:26.933 12:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.933 12:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:26.933 12:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:26.933 12:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:26.933 12:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.933 12:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:26.933 12:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:26.933 12:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:26.933 12:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.933 12:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.933 12:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:26.933 12:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.933 12:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:26.933 12:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.933 12:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.190 00:18:27.190 12:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.190 12:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.190 12:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.448 12:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.448 12:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.448 12:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:27.448 12:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.448 12:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:27.448 12:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.448 { 00:18:27.448 "cntlid": 59, 00:18:27.448 "qid": 0, 00:18:27.448 "state": "enabled", 00:18:27.448 "listen_address": { 00:18:27.448 "trtype": "TCP", 00:18:27.448 "adrfam": "IPv4", 00:18:27.448 "traddr": "10.0.0.2", 00:18:27.448 "trsvcid": "4420" 00:18:27.448 }, 00:18:27.448 "peer_address": { 00:18:27.448 "trtype": "TCP", 00:18:27.448 "adrfam": "IPv4", 00:18:27.448 "traddr": 
"10.0.0.1", 00:18:27.448 "trsvcid": "59078" 00:18:27.448 }, 00:18:27.448 "auth": { 00:18:27.448 "state": "completed", 00:18:27.448 "digest": "sha384", 00:18:27.448 "dhgroup": "ffdhe2048" 00:18:27.448 } 00:18:27.448 } 00:18:27.448 ]' 00:18:27.448 12:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.448 12:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:27.448 12:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.448 12:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:27.448 12:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.448 12:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.448 12:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.448 12:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.705 12:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MTM0MDY3ZDQ3NjEyNWQzYWZkMTVjNTk2NzEzMjRhMGXaG6FM: --dhchap-ctrl-secret DHHC-1:02:ZDY4ZDBkOWNjNmJmNDlkYjE2YzBlZDQ3ZTI1MjEwNjg0NWEyMTFiNDA2MjRlZTkxbMBjPA==: 00:18:28.271 12:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.271 12:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:28.271 12:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:28.271 12:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.271 12:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:28.271 12:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:28.271 12:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:28.271 12:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:28.529 12:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:28.529 12:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.529 12:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:28.529 12:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:28.529 12:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:28.529 12:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.529 12:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.529 12:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:28.529 12:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.529 12:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:18:28.529 12:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.529 12:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.789 00:18:28.789 12:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.789 12:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.789 12:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.789 12:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.789 12:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.789 12:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:28.789 12:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.789 12:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:28.789 12:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.789 { 00:18:28.789 "cntlid": 61, 00:18:28.789 "qid": 0, 00:18:28.789 "state": "enabled", 00:18:28.789 "listen_address": { 00:18:28.789 "trtype": "TCP", 00:18:28.789 "adrfam": "IPv4", 00:18:28.789 "traddr": "10.0.0.2", 00:18:28.789 "trsvcid": "4420" 00:18:28.789 }, 00:18:28.789 "peer_address": { 
00:18:28.789 "trtype": "TCP", 00:18:28.789 "adrfam": "IPv4", 00:18:28.789 "traddr": "10.0.0.1", 00:18:28.789 "trsvcid": "59096" 00:18:28.789 }, 00:18:28.789 "auth": { 00:18:28.789 "state": "completed", 00:18:28.789 "digest": "sha384", 00:18:28.789 "dhgroup": "ffdhe2048" 00:18:28.789 } 00:18:28.789 } 00:18:28.789 ]' 00:18:29.086 12:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.086 12:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:29.086 12:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.086 12:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:29.086 12:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.086 12:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.086 12:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.086 12:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.362 12:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YTI3YmJjNGUxMzk0MzE0ZDNkNTk4OGYyMjUyNjUzMGI2MTllMTY2NGUwZWEyMDc1ga2svA==: --dhchap-ctrl-secret DHHC-1:01:NTNmZGViYWZiZGVhYjk3Zjc5YzIyYmIwYjNhOGY2YTFVhIJL: 00:18:29.928 12:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.928 12:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:29.928 12:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:29.928 12:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.928 12:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:29.928 12:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.928 12:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:29.928 12:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:29.928 12:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:29.928 12:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.928 12:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:29.928 12:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:29.928 12:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:29.928 12:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.928 12:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:29.928 12:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:29.928 12:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.928 12:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:18:29.928 12:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:29.928 12:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:30.185 00:18:30.185 12:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.185 12:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.185 12:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:30.442 12:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.442 12:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.442 12:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:30.442 12:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.442 12:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:30.442 12:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.442 { 00:18:30.442 "cntlid": 63, 00:18:30.442 "qid": 0, 00:18:30.442 "state": "enabled", 00:18:30.442 "listen_address": { 00:18:30.442 "trtype": "TCP", 00:18:30.442 "adrfam": "IPv4", 00:18:30.442 "traddr": "10.0.0.2", 00:18:30.442 "trsvcid": "4420" 00:18:30.442 }, 00:18:30.442 "peer_address": { 00:18:30.442 "trtype": "TCP", 00:18:30.442 "adrfam": 
"IPv4", 00:18:30.442 "traddr": "10.0.0.1", 00:18:30.442 "trsvcid": "59110" 00:18:30.442 }, 00:18:30.442 "auth": { 00:18:30.442 "state": "completed", 00:18:30.442 "digest": "sha384", 00:18:30.442 "dhgroup": "ffdhe2048" 00:18:30.442 } 00:18:30.442 } 00:18:30.442 ]' 00:18:30.442 12:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.443 12:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:30.443 12:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.443 12:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:30.443 12:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.443 12:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.443 12:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.443 12:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.700 12:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YjBhZDZkZTIxYjFmMmY1NzYyNDIzMTE0MzYxYjFjODAyY2Y5ODkzMzkzNDg4NDI2ZWMwMzAyN2RmZWQ1NDMwYlNELfg=: 00:18:31.265 12:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.265 12:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:31.265 12:07:20 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:31.266 12:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.266 12:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:31.266 12:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:31.266 12:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.266 12:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:31.266 12:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:31.524 12:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:18:31.524 12:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.524 12:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:31.524 12:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:31.524 12:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:31.524 12:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.524 12:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.524 12:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:31.524 12:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.524 12:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 
-- # [[ 0 == 0 ]] 00:18:31.524 12:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.524 12:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.782 00:18:31.782 12:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.782 12:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.782 12:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.782 12:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.782 12:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.782 12:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:31.782 12:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.782 12:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:31.782 12:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:31.782 { 00:18:31.782 "cntlid": 65, 00:18:31.782 "qid": 0, 00:18:31.782 "state": "enabled", 00:18:31.782 "listen_address": { 00:18:31.782 "trtype": "TCP", 00:18:31.782 "adrfam": "IPv4", 00:18:31.782 "traddr": "10.0.0.2", 00:18:31.782 "trsvcid": "4420" 00:18:31.782 }, 00:18:31.782 
"peer_address": { 00:18:31.782 "trtype": "TCP", 00:18:31.782 "adrfam": "IPv4", 00:18:31.782 "traddr": "10.0.0.1", 00:18:31.782 "trsvcid": "59134" 00:18:31.782 }, 00:18:31.782 "auth": { 00:18:31.782 "state": "completed", 00:18:31.782 "digest": "sha384", 00:18:31.782 "dhgroup": "ffdhe3072" 00:18:31.782 } 00:18:31.782 } 00:18:31.782 ]' 00:18:31.782 12:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:31.782 12:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:31.782 12:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.040 12:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:32.040 12:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.040 12:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.040 12:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.040 12:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.040 12:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MTZhMmJlMjRiMjgwMDk0OGRkNWEwY2ZhNjg4OTAwOWIxYzg0NjM4Njc4MjdkOTkwDJuT6A==: --dhchap-ctrl-secret DHHC-1:03:YjIxNGYwODk5ZmFlNzliZDc1OGU0Y2IyYmZmN2I4MjViZmMwNDUwZDBkOWNiZDlmZGM0YjhiZmI1MzE3OGYxZnEWiY8=: 00:18:32.606 12:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.606 12:07:22 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:32.606 12:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.606 12:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.606 12:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.606 12:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:32.607 12:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:32.607 12:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:32.863 12:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:32.863 12:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:32.863 12:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:32.863 12:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:32.863 12:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:32.863 12:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.863 12:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.863 12:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.863 12:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.863 
12:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.863 12:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.863 12:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.121 00:18:33.121 12:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.121 12:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.121 12:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.379 12:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.379 12:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.379 12:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:33.379 12:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.379 12:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:33.379 12:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.379 { 00:18:33.379 "cntlid": 67, 00:18:33.379 "qid": 0, 00:18:33.379 "state": "enabled", 00:18:33.379 "listen_address": { 00:18:33.379 "trtype": "TCP", 00:18:33.379 "adrfam": "IPv4", 00:18:33.379 "traddr": "10.0.0.2", 
00:18:33.379 "trsvcid": "4420" 00:18:33.379 }, 00:18:33.379 "peer_address": { 00:18:33.379 "trtype": "TCP", 00:18:33.379 "adrfam": "IPv4", 00:18:33.379 "traddr": "10.0.0.1", 00:18:33.379 "trsvcid": "59158" 00:18:33.379 }, 00:18:33.379 "auth": { 00:18:33.379 "state": "completed", 00:18:33.379 "digest": "sha384", 00:18:33.379 "dhgroup": "ffdhe3072" 00:18:33.379 } 00:18:33.379 } 00:18:33.379 ]' 00:18:33.379 12:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.379 12:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:33.379 12:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:33.379 12:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:33.379 12:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:33.379 12:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.379 12:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.379 12:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.636 12:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MTM0MDY3ZDQ3NjEyNWQzYWZkMTVjNTk2NzEzMjRhMGXaG6FM: --dhchap-ctrl-secret DHHC-1:02:ZDY4ZDBkOWNjNmJmNDlkYjE2YzBlZDQ3ZTI1MjEwNjg0NWEyMTFiNDA2MjRlZTkxbMBjPA==: 00:18:34.202 12:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.202 12:07:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:34.202 12:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:34.202 12:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.202 12:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:34.202 12:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:34.202 12:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:34.202 12:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:34.202 12:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:18:34.202 12:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.202 12:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:34.202 12:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:34.460 12:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:34.460 12:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.460 12:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.460 12:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:34.460 12:07:23 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:34.460 12:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:34.460 12:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.460 12:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.460 00:18:34.460 12:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.460 12:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.460 12:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.718 12:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.718 12:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.718 12:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:34.718 12:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.718 12:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:34.718 12:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.718 { 00:18:34.718 "cntlid": 69, 00:18:34.718 "qid": 0, 00:18:34.718 "state": "enabled", 00:18:34.718 "listen_address": { 00:18:34.718 "trtype": "TCP", 
00:18:34.718 "adrfam": "IPv4", 00:18:34.718 "traddr": "10.0.0.2", 00:18:34.718 "trsvcid": "4420" 00:18:34.718 }, 00:18:34.718 "peer_address": { 00:18:34.718 "trtype": "TCP", 00:18:34.718 "adrfam": "IPv4", 00:18:34.718 "traddr": "10.0.0.1", 00:18:34.718 "trsvcid": "60590" 00:18:34.718 }, 00:18:34.718 "auth": { 00:18:34.718 "state": "completed", 00:18:34.718 "digest": "sha384", 00:18:34.718 "dhgroup": "ffdhe3072" 00:18:34.718 } 00:18:34.718 } 00:18:34.718 ]' 00:18:34.718 12:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.718 12:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:34.718 12:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.976 12:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:34.976 12:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.976 12:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.976 12:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.976 12:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.977 12:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YTI3YmJjNGUxMzk0MzE0ZDNkNTk4OGYyMjUyNjUzMGI2MTllMTY2NGUwZWEyMDc1ga2svA==: --dhchap-ctrl-secret DHHC-1:01:NTNmZGViYWZiZGVhYjk3Zjc5YzIyYmIwYjNhOGY2YTFVhIJL: 00:18:35.543 12:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.543 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:18:35.543 12:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:35.543 12:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:35.543 12:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.543 12:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:35.543 12:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.543 12:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:35.543 12:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:35.801 12:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:35.801 12:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.801 12:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:35.801 12:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:35.801 12:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:35.801 12:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.801 12:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:35.801 12:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:35.801 12:07:25 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:18:35.801 12:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:35.801 12:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:35.801 12:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:36.059 00:18:36.059 12:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.059 12:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.060 12:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.317 12:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.317 12:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.317 12:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:36.317 12:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.317 12:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:36.317 12:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.317 { 00:18:36.317 "cntlid": 71, 00:18:36.317 "qid": 0, 00:18:36.317 "state": "enabled", 00:18:36.317 "listen_address": { 00:18:36.317 "trtype": "TCP", 00:18:36.317 "adrfam": "IPv4", 00:18:36.317 "traddr": 
"10.0.0.2", 00:18:36.317 "trsvcid": "4420" 00:18:36.317 }, 00:18:36.317 "peer_address": { 00:18:36.317 "trtype": "TCP", 00:18:36.317 "adrfam": "IPv4", 00:18:36.317 "traddr": "10.0.0.1", 00:18:36.317 "trsvcid": "60624" 00:18:36.317 }, 00:18:36.317 "auth": { 00:18:36.317 "state": "completed", 00:18:36.317 "digest": "sha384", 00:18:36.317 "dhgroup": "ffdhe3072" 00:18:36.317 } 00:18:36.317 } 00:18:36.317 ]' 00:18:36.318 12:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.318 12:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:36.318 12:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.318 12:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:36.318 12:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.318 12:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.318 12:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.318 12:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.576 12:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YjBhZDZkZTIxYjFmMmY1NzYyNDIzMTE0MzYxYjFjODAyY2Y5ODkzMzkzNDg4NDI2ZWMwMzAyN2RmZWQ1NDMwYlNELfg=: 00:18:37.141 12:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.141 12:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:37.141 12:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:37.141 12:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.141 12:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.141 12:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:37.141 12:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.141 12:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:37.141 12:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:37.399 12:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:18:37.399 12:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:37.399 12:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:37.399 12:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:37.399 12:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:37.399 12:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.399 12:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.399 12:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:37.399 12:07:26 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.399 12:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.399 12:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.399 12:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.657 00:18:37.657 12:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.657 12:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.657 12:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.657 12:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.657 12:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.657 12:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:37.657 12:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.657 12:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.657 12:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.657 { 00:18:37.657 "cntlid": 73, 00:18:37.657 "qid": 0, 00:18:37.657 "state": "enabled", 00:18:37.657 "listen_address": { 00:18:37.657 
"trtype": "TCP", 00:18:37.657 "adrfam": "IPv4", 00:18:37.657 "traddr": "10.0.0.2", 00:18:37.657 "trsvcid": "4420" 00:18:37.657 }, 00:18:37.657 "peer_address": { 00:18:37.657 "trtype": "TCP", 00:18:37.657 "adrfam": "IPv4", 00:18:37.657 "traddr": "10.0.0.1", 00:18:37.657 "trsvcid": "60650" 00:18:37.657 }, 00:18:37.657 "auth": { 00:18:37.657 "state": "completed", 00:18:37.657 "digest": "sha384", 00:18:37.657 "dhgroup": "ffdhe4096" 00:18:37.657 } 00:18:37.657 } 00:18:37.657 ]' 00:18:37.657 12:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.657 12:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:37.657 12:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.914 12:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:37.914 12:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:37.914 12:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.914 12:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.914 12:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.914 12:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MTZhMmJlMjRiMjgwMDk0OGRkNWEwY2ZhNjg4OTAwOWIxYzg0NjM4Njc4MjdkOTkwDJuT6A==: --dhchap-ctrl-secret DHHC-1:03:YjIxNGYwODk5ZmFlNzliZDc1OGU0Y2IyYmZmN2I4MjViZmMwNDUwZDBkOWNiZDlmZGM0YjhiZmI1MzE3OGYxZnEWiY8=: 00:18:38.480 12:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:18:38.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.480 12:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:38.480 12:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:38.480 12:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.480 12:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:38.480 12:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.480 12:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:38.480 12:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:38.738 12:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:18:38.738 12:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.738 12:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:38.738 12:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:38.738 12:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:38.738 12:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.738 12:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.738 12:07:28 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:18:38.738 12:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.738 12:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:38.738 12:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.738 12:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.995 00:18:38.995 12:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:38.995 12:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:38.995 12:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.251 12:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.251 12:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.252 12:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:39.252 12:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.252 12:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:39.252 12:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.252 { 00:18:39.252 "cntlid": 75, 00:18:39.252 "qid": 0, 
00:18:39.252 "state": "enabled", 00:18:39.252 "listen_address": { 00:18:39.252 "trtype": "TCP", 00:18:39.252 "adrfam": "IPv4", 00:18:39.252 "traddr": "10.0.0.2", 00:18:39.252 "trsvcid": "4420" 00:18:39.252 }, 00:18:39.252 "peer_address": { 00:18:39.252 "trtype": "TCP", 00:18:39.252 "adrfam": "IPv4", 00:18:39.252 "traddr": "10.0.0.1", 00:18:39.252 "trsvcid": "60682" 00:18:39.252 }, 00:18:39.252 "auth": { 00:18:39.252 "state": "completed", 00:18:39.252 "digest": "sha384", 00:18:39.252 "dhgroup": "ffdhe4096" 00:18:39.252 } 00:18:39.252 } 00:18:39.252 ]' 00:18:39.252 12:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.252 12:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:39.252 12:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.252 12:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:39.252 12:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.252 12:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.252 12:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.252 12:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.508 12:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MTM0MDY3ZDQ3NjEyNWQzYWZkMTVjNTk2NzEzMjRhMGXaG6FM: --dhchap-ctrl-secret DHHC-1:02:ZDY4ZDBkOWNjNmJmNDlkYjE2YzBlZDQ3ZTI1MjEwNjg0NWEyMTFiNDA2MjRlZTkxbMBjPA==: 00:18:40.072 12:07:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.072 12:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:40.072 12:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:40.072 12:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.072 12:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:40.072 12:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.072 12:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:40.072 12:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:40.329 12:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:18:40.329 12:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.329 12:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:40.329 12:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:40.329 12:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:40.329 12:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.329 12:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.329 
12:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:40.329 12:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.329 12:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:40.329 12:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.329 12:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.586 00:18:40.586 12:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.586 12:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.586 12:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.844 12:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.844 12:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.844 12:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:40.844 12:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.844 12:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:40.844 12:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.844 { 00:18:40.844 
"cntlid": 77, 00:18:40.844 "qid": 0, 00:18:40.844 "state": "enabled", 00:18:40.844 "listen_address": { 00:18:40.844 "trtype": "TCP", 00:18:40.844 "adrfam": "IPv4", 00:18:40.844 "traddr": "10.0.0.2", 00:18:40.844 "trsvcid": "4420" 00:18:40.844 }, 00:18:40.844 "peer_address": { 00:18:40.844 "trtype": "TCP", 00:18:40.844 "adrfam": "IPv4", 00:18:40.844 "traddr": "10.0.0.1", 00:18:40.844 "trsvcid": "60712" 00:18:40.844 }, 00:18:40.844 "auth": { 00:18:40.844 "state": "completed", 00:18:40.844 "digest": "sha384", 00:18:40.844 "dhgroup": "ffdhe4096" 00:18:40.844 } 00:18:40.844 } 00:18:40.844 ]' 00:18:40.844 12:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.844 12:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:40.844 12:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.844 12:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:40.844 12:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:40.844 12:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.844 12:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.844 12:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.102 12:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YTI3YmJjNGUxMzk0MzE0ZDNkNTk4OGYyMjUyNjUzMGI2MTllMTY2NGUwZWEyMDc1ga2svA==: --dhchap-ctrl-secret DHHC-1:01:NTNmZGViYWZiZGVhYjk3Zjc5YzIyYmIwYjNhOGY2YTFVhIJL: 00:18:41.667 12:07:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.667 12:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:41.667 12:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:41.667 12:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.667 12:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:41.667 12:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.667 12:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:41.667 12:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:41.924 12:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:18:41.924 12:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:41.924 12:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:41.924 12:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:41.924 12:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:41.924 12:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.924 12:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 
00:18:41.924 12:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:41.924 12:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.924 12:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:41.925 12:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:41.925 12:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.183 00:18:42.183 12:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.183 12:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.183 12:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.183 12:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.183 12:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.183 12:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:42.183 12:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.183 12:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:42.183 12:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.183 { 00:18:42.183 "cntlid": 79, 00:18:42.183 "qid": 0, 
00:18:42.183 "state": "enabled", 00:18:42.183 "listen_address": { 00:18:42.183 "trtype": "TCP", 00:18:42.183 "adrfam": "IPv4", 00:18:42.183 "traddr": "10.0.0.2", 00:18:42.183 "trsvcid": "4420" 00:18:42.183 }, 00:18:42.183 "peer_address": { 00:18:42.183 "trtype": "TCP", 00:18:42.183 "adrfam": "IPv4", 00:18:42.183 "traddr": "10.0.0.1", 00:18:42.183 "trsvcid": "60740" 00:18:42.183 }, 00:18:42.183 "auth": { 00:18:42.183 "state": "completed", 00:18:42.183 "digest": "sha384", 00:18:42.183 "dhgroup": "ffdhe4096" 00:18:42.183 } 00:18:42.183 } 00:18:42.183 ]' 00:18:42.183 12:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.440 12:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:42.440 12:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.440 12:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:42.440 12:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.440 12:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.440 12:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.440 12:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.699 12:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YjBhZDZkZTIxYjFmMmY1NzYyNDIzMTE0MzYxYjFjODAyY2Y5ODkzMzkzNDg4NDI2ZWMwMzAyN2RmZWQ1NDMwYlNELfg=: 00:18:43.266 12:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:18:43.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.266 12:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:43.266 12:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:43.266 12:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.266 12:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:43.266 12:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:43.266 12:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.266 12:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:43.266 12:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:43.266 12:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:43.266 12:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.266 12:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:43.266 12:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:43.266 12:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:43.266 12:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.266 12:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:18:43.266 12:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:43.266 12:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.266 12:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:43.266 12:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.266 12:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.525 00:18:43.525 12:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.525 12:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.525 12:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.783 12:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.783 12:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.783 12:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:43.783 12:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.783 12:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:43.783 12:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:18:43.783 { 00:18:43.783 "cntlid": 81, 00:18:43.783 "qid": 0, 00:18:43.783 "state": "enabled", 00:18:43.783 "listen_address": { 00:18:43.783 "trtype": "TCP", 00:18:43.783 "adrfam": "IPv4", 00:18:43.783 "traddr": "10.0.0.2", 00:18:43.783 "trsvcid": "4420" 00:18:43.783 }, 00:18:43.783 "peer_address": { 00:18:43.783 "trtype": "TCP", 00:18:43.783 "adrfam": "IPv4", 00:18:43.783 "traddr": "10.0.0.1", 00:18:43.783 "trsvcid": "60756" 00:18:43.783 }, 00:18:43.783 "auth": { 00:18:43.783 "state": "completed", 00:18:43.783 "digest": "sha384", 00:18:43.783 "dhgroup": "ffdhe6144" 00:18:43.783 } 00:18:43.783 } 00:18:43.783 ]' 00:18:43.783 12:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.783 12:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:43.783 12:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.041 12:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:44.041 12:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:44.041 12:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.041 12:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.041 12:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.041 12:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MTZhMmJlMjRiMjgwMDk0OGRkNWEwY2ZhNjg4OTAwOWIxYzg0NjM4Njc4MjdkOTkwDJuT6A==: --dhchap-ctrl-secret 
DHHC-1:03:YjIxNGYwODk5ZmFlNzliZDc1OGU0Y2IyYmZmN2I4MjViZmMwNDUwZDBkOWNiZDlmZGM0YjhiZmI1MzE3OGYxZnEWiY8=: 00:18:44.607 12:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.607 12:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:44.607 12:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.607 12:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.607 12:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:44.607 12:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.607 12:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:44.607 12:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:44.865 12:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:44.865 12:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:44.865 12:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:44.865 12:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:44.865 12:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:44.865 12:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.865 12:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.865 12:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.865 12:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.865 12:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:44.865 12:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.865 12:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.123 00:18:45.123 12:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.123 12:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.123 12:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.380 12:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.380 12:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.380 12:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:45.380 12:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.380 12:07:34 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:45.380 12:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.380 { 00:18:45.380 "cntlid": 83, 00:18:45.380 "qid": 0, 00:18:45.380 "state": "enabled", 00:18:45.380 "listen_address": { 00:18:45.380 "trtype": "TCP", 00:18:45.380 "adrfam": "IPv4", 00:18:45.380 "traddr": "10.0.0.2", 00:18:45.380 "trsvcid": "4420" 00:18:45.380 }, 00:18:45.380 "peer_address": { 00:18:45.380 "trtype": "TCP", 00:18:45.380 "adrfam": "IPv4", 00:18:45.380 "traddr": "10.0.0.1", 00:18:45.380 "trsvcid": "33610" 00:18:45.380 }, 00:18:45.380 "auth": { 00:18:45.380 "state": "completed", 00:18:45.380 "digest": "sha384", 00:18:45.380 "dhgroup": "ffdhe6144" 00:18:45.380 } 00:18:45.380 } 00:18:45.380 ]' 00:18:45.380 12:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.380 12:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:45.380 12:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.637 12:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:45.637 12:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.637 12:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.637 12:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.637 12:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.637 12:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret 
DHHC-1:01:MTM0MDY3ZDQ3NjEyNWQzYWZkMTVjNTk2NzEzMjRhMGXaG6FM: --dhchap-ctrl-secret DHHC-1:02:ZDY4ZDBkOWNjNmJmNDlkYjE2YzBlZDQ3ZTI1MjEwNjg0NWEyMTFiNDA2MjRlZTkxbMBjPA==: 00:18:46.203 12:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.203 12:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:46.203 12:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:46.203 12:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.203 12:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:46.203 12:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.203 12:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:46.203 12:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:46.461 12:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:46.461 12:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.461 12:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:46.461 12:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:46.461 12:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:46.461 12:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.461 12:07:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.461 12:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:46.461 12:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.461 12:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:46.461 12:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.461 12:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.719 00:18:46.719 12:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:46.719 12:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:46.719 12:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.978 12:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.978 12:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.978 12:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:46.978 12:07:36 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:46.978 12:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:46.978 12:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:46.978 { 00:18:46.978 "cntlid": 85, 00:18:46.978 "qid": 0, 00:18:46.978 "state": "enabled", 00:18:46.978 "listen_address": { 00:18:46.978 "trtype": "TCP", 00:18:46.978 "adrfam": "IPv4", 00:18:46.978 "traddr": "10.0.0.2", 00:18:46.978 "trsvcid": "4420" 00:18:46.978 }, 00:18:46.978 "peer_address": { 00:18:46.978 "trtype": "TCP", 00:18:46.978 "adrfam": "IPv4", 00:18:46.978 "traddr": "10.0.0.1", 00:18:46.978 "trsvcid": "33638" 00:18:46.978 }, 00:18:46.978 "auth": { 00:18:46.978 "state": "completed", 00:18:46.978 "digest": "sha384", 00:18:46.978 "dhgroup": "ffdhe6144" 00:18:46.978 } 00:18:46.978 } 00:18:46.978 ]' 00:18:46.978 12:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:46.978 12:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:46.978 12:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:46.978 12:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:46.978 12:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.978 12:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.978 12:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.978 12:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.236 12:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 
--hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YTI3YmJjNGUxMzk0MzE0ZDNkNTk4OGYyMjUyNjUzMGI2MTllMTY2NGUwZWEyMDc1ga2svA==: --dhchap-ctrl-secret DHHC-1:01:NTNmZGViYWZiZGVhYjk3Zjc5YzIyYmIwYjNhOGY2YTFVhIJL: 00:18:47.803 12:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.803 12:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:47.803 12:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:47.803 12:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.803 12:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:47.803 12:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:47.803 12:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:47.803 12:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:48.060 12:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:18:48.060 12:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.060 12:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:48.060 12:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:48.060 12:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:48.060 12:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.060 12:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:48.060 12:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:48.060 12:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.060 12:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:48.060 12:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:48.060 12:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:48.318 00:18:48.318 12:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.318 12:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.318 12:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.576 12:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.577 12:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.577 12:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:48.577 12:07:37 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:48.577 12:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:48.577 12:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:48.577 { 00:18:48.577 "cntlid": 87, 00:18:48.577 "qid": 0, 00:18:48.577 "state": "enabled", 00:18:48.577 "listen_address": { 00:18:48.577 "trtype": "TCP", 00:18:48.577 "adrfam": "IPv4", 00:18:48.577 "traddr": "10.0.0.2", 00:18:48.577 "trsvcid": "4420" 00:18:48.577 }, 00:18:48.577 "peer_address": { 00:18:48.577 "trtype": "TCP", 00:18:48.577 "adrfam": "IPv4", 00:18:48.577 "traddr": "10.0.0.1", 00:18:48.577 "trsvcid": "33664" 00:18:48.577 }, 00:18:48.577 "auth": { 00:18:48.577 "state": "completed", 00:18:48.577 "digest": "sha384", 00:18:48.577 "dhgroup": "ffdhe6144" 00:18:48.577 } 00:18:48.577 } 00:18:48.577 ]' 00:18:48.577 12:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:48.577 12:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:48.577 12:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:48.577 12:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:48.577 12:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:48.577 12:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.577 12:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.577 12:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.835 12:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 
--hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YjBhZDZkZTIxYjFmMmY1NzYyNDIzMTE0MzYxYjFjODAyY2Y5ODkzMzkzNDg4NDI2ZWMwMzAyN2RmZWQ1NDMwYlNELfg=: 00:18:49.403 12:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.403 12:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:49.403 12:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:49.403 12:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.403 12:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:49.403 12:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:49.403 12:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:49.403 12:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:49.403 12:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:49.403 12:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:18:49.403 12:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:49.403 12:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:49.403 12:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:49.403 12:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:49.403 12:07:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.404 12:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.404 12:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:49.404 12:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.661 12:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:49.661 12:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.661 12:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.920 00:18:49.920 12:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.920 12:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.920 12:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.188 12:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.188 12:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.188 12:07:39 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:18:50.188 12:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.188 12:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:50.188 12:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:50.188 { 00:18:50.188 "cntlid": 89, 00:18:50.188 "qid": 0, 00:18:50.188 "state": "enabled", 00:18:50.188 "listen_address": { 00:18:50.188 "trtype": "TCP", 00:18:50.188 "adrfam": "IPv4", 00:18:50.188 "traddr": "10.0.0.2", 00:18:50.188 "trsvcid": "4420" 00:18:50.188 }, 00:18:50.188 "peer_address": { 00:18:50.188 "trtype": "TCP", 00:18:50.188 "adrfam": "IPv4", 00:18:50.188 "traddr": "10.0.0.1", 00:18:50.188 "trsvcid": "33688" 00:18:50.188 }, 00:18:50.188 "auth": { 00:18:50.188 "state": "completed", 00:18:50.188 "digest": "sha384", 00:18:50.188 "dhgroup": "ffdhe8192" 00:18:50.188 } 00:18:50.188 } 00:18:50.188 ]' 00:18:50.188 12:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:50.188 12:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:50.188 12:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:50.188 12:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:50.188 12:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:50.188 12:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.188 12:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.188 12:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.497 12:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MTZhMmJlMjRiMjgwMDk0OGRkNWEwY2ZhNjg4OTAwOWIxYzg0NjM4Njc4MjdkOTkwDJuT6A==: --dhchap-ctrl-secret DHHC-1:03:YjIxNGYwODk5ZmFlNzliZDc1OGU0Y2IyYmZmN2I4MjViZmMwNDUwZDBkOWNiZDlmZGM0YjhiZmI1MzE3OGYxZnEWiY8=: 00:18:51.063 12:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.063 12:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:51.063 12:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:51.063 12:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.063 12:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:51.063 12:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.063 12:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:51.063 12:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:51.321 12:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:18:51.321 12:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.321 12:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:51.321 12:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:51.321 12:07:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:51.321 12:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.321 12:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.321 12:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:51.321 12:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.321 12:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:51.321 12:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.321 12:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.578 00:18:51.578 12:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.578 12:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.578 12:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.836 12:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.836 12:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.836 12:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:51.836 12:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.836 12:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:51.836 12:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.836 { 00:18:51.836 "cntlid": 91, 00:18:51.836 "qid": 0, 00:18:51.836 "state": "enabled", 00:18:51.836 "listen_address": { 00:18:51.836 "trtype": "TCP", 00:18:51.836 "adrfam": "IPv4", 00:18:51.836 "traddr": "10.0.0.2", 00:18:51.836 "trsvcid": "4420" 00:18:51.836 }, 00:18:51.836 "peer_address": { 00:18:51.836 "trtype": "TCP", 00:18:51.836 "adrfam": "IPv4", 00:18:51.836 "traddr": "10.0.0.1", 00:18:51.836 "trsvcid": "33702" 00:18:51.836 }, 00:18:51.836 "auth": { 00:18:51.836 "state": "completed", 00:18:51.836 "digest": "sha384", 00:18:51.836 "dhgroup": "ffdhe8192" 00:18:51.836 } 00:18:51.836 } 00:18:51.836 ]' 00:18:51.836 12:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.836 12:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:51.836 12:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.836 12:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:51.836 12:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.094 12:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.094 12:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.094 12:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.094 
12:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MTM0MDY3ZDQ3NjEyNWQzYWZkMTVjNTk2NzEzMjRhMGXaG6FM: --dhchap-ctrl-secret DHHC-1:02:ZDY4ZDBkOWNjNmJmNDlkYjE2YzBlZDQ3ZTI1MjEwNjg0NWEyMTFiNDA2MjRlZTkxbMBjPA==: 00:18:52.659 12:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.659 12:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:52.659 12:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:52.659 12:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.659 12:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:52.659 12:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.659 12:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:52.659 12:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:52.916 12:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:18:52.916 12:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.916 12:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:52.916 12:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe8192 00:18:52.916 12:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:52.916 12:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.916 12:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.916 12:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:52.916 12:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.916 12:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:52.916 12:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.916 12:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.481 00:18:53.481 12:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.481 12:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.481 12:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.481 12:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.481 12:07:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.481 12:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:53.481 12:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.481 12:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:53.481 12:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.481 { 00:18:53.481 "cntlid": 93, 00:18:53.481 "qid": 0, 00:18:53.481 "state": "enabled", 00:18:53.481 "listen_address": { 00:18:53.481 "trtype": "TCP", 00:18:53.481 "adrfam": "IPv4", 00:18:53.481 "traddr": "10.0.0.2", 00:18:53.481 "trsvcid": "4420" 00:18:53.481 }, 00:18:53.481 "peer_address": { 00:18:53.481 "trtype": "TCP", 00:18:53.481 "adrfam": "IPv4", 00:18:53.481 "traddr": "10.0.0.1", 00:18:53.481 "trsvcid": "33720" 00:18:53.481 }, 00:18:53.481 "auth": { 00:18:53.481 "state": "completed", 00:18:53.481 "digest": "sha384", 00:18:53.481 "dhgroup": "ffdhe8192" 00:18:53.481 } 00:18:53.481 } 00:18:53.481 ]' 00:18:53.481 12:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:53.481 12:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:53.481 12:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:53.481 12:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:53.481 12:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:53.481 12:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.481 12:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.481 12:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:53.739 12:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YTI3YmJjNGUxMzk0MzE0ZDNkNTk4OGYyMjUyNjUzMGI2MTllMTY2NGUwZWEyMDc1ga2svA==: --dhchap-ctrl-secret DHHC-1:01:NTNmZGViYWZiZGVhYjk3Zjc5YzIyYmIwYjNhOGY2YTFVhIJL: 00:18:54.303 12:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.303 12:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:54.303 12:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:54.303 12:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.303 12:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:54.303 12:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.303 12:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:54.303 12:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:54.560 12:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:18:54.560 12:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.560 12:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:54.560 12:07:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:54.560 12:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:54.560 12:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.560 12:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:54.560 12:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:54.560 12:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.560 12:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:54.560 12:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:54.560 12:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:54.817 00:18:55.073 12:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.073 12:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.073 12:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.073 12:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.073 12:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.074 12:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:55.074 12:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.074 12:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:55.074 12:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.074 { 00:18:55.074 "cntlid": 95, 00:18:55.074 "qid": 0, 00:18:55.074 "state": "enabled", 00:18:55.074 "listen_address": { 00:18:55.074 "trtype": "TCP", 00:18:55.074 "adrfam": "IPv4", 00:18:55.074 "traddr": "10.0.0.2", 00:18:55.074 "trsvcid": "4420" 00:18:55.074 }, 00:18:55.074 "peer_address": { 00:18:55.074 "trtype": "TCP", 00:18:55.074 "adrfam": "IPv4", 00:18:55.074 "traddr": "10.0.0.1", 00:18:55.074 "trsvcid": "55728" 00:18:55.074 }, 00:18:55.074 "auth": { 00:18:55.074 "state": "completed", 00:18:55.074 "digest": "sha384", 00:18:55.074 "dhgroup": "ffdhe8192" 00:18:55.074 } 00:18:55.074 } 00:18:55.074 ]' 00:18:55.074 12:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.074 12:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:55.074 12:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.330 12:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:55.330 12:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.330 12:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.330 12:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.330 12:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.331 
12:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YjBhZDZkZTIxYjFmMmY1NzYyNDIzMTE0MzYxYjFjODAyY2Y5ODkzMzkzNDg4NDI2ZWMwMzAyN2RmZWQ1NDMwYlNELfg=: 00:18:55.895 12:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.895 12:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:55.895 12:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:55.895 12:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.895 12:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:55.895 12:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:55.895 12:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:55.895 12:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.895 12:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:55.895 12:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:56.152 12:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:18:56.152 12:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.152 12:07:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:56.152 12:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:56.152 12:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:56.152 12:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.152 12:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.152 12:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.152 12:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.152 12:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.152 12:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.152 12:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.409 00:18:56.409 12:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.409 12:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.409 12:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.667 12:07:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.667 12:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.667 12:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.667 12:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.667 12:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.667 12:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.667 { 00:18:56.667 "cntlid": 97, 00:18:56.667 "qid": 0, 00:18:56.667 "state": "enabled", 00:18:56.667 "listen_address": { 00:18:56.667 "trtype": "TCP", 00:18:56.667 "adrfam": "IPv4", 00:18:56.667 "traddr": "10.0.0.2", 00:18:56.667 "trsvcid": "4420" 00:18:56.667 }, 00:18:56.667 "peer_address": { 00:18:56.667 "trtype": "TCP", 00:18:56.667 "adrfam": "IPv4", 00:18:56.667 "traddr": "10.0.0.1", 00:18:56.667 "trsvcid": "55764" 00:18:56.667 }, 00:18:56.667 "auth": { 00:18:56.667 "state": "completed", 00:18:56.667 "digest": "sha512", 00:18:56.667 "dhgroup": "null" 00:18:56.667 } 00:18:56.667 } 00:18:56.667 ]' 00:18:56.667 12:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.667 12:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:56.667 12:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.667 12:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:56.667 12:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.667 12:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.667 12:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.667 12:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.925 12:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MTZhMmJlMjRiMjgwMDk0OGRkNWEwY2ZhNjg4OTAwOWIxYzg0NjM4Njc4MjdkOTkwDJuT6A==: --dhchap-ctrl-secret DHHC-1:03:YjIxNGYwODk5ZmFlNzliZDc1OGU0Y2IyYmZmN2I4MjViZmMwNDUwZDBkOWNiZDlmZGM0YjhiZmI1MzE3OGYxZnEWiY8=: 00:18:57.490 12:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.490 12:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:57.490 12:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:57.490 12:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.490 12:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:57.490 12:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.490 12:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:57.490 12:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:57.748 12:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:18:57.748 12:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey 
qpairs 00:18:57.748 12:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:57.748 12:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:57.748 12:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:57.748 12:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.748 12:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.748 12:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:57.748 12:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.748 12:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:57.748 12:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.748 12:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.748 00:18:57.748 12:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.748 12:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.748 12:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:58.005 12:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.005 12:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.005 12:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:58.005 12:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.005 12:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:58.005 12:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.005 { 00:18:58.005 "cntlid": 99, 00:18:58.005 "qid": 0, 00:18:58.005 "state": "enabled", 00:18:58.005 "listen_address": { 00:18:58.005 "trtype": "TCP", 00:18:58.005 "adrfam": "IPv4", 00:18:58.005 "traddr": "10.0.0.2", 00:18:58.005 "trsvcid": "4420" 00:18:58.005 }, 00:18:58.005 "peer_address": { 00:18:58.005 "trtype": "TCP", 00:18:58.005 "adrfam": "IPv4", 00:18:58.005 "traddr": "10.0.0.1", 00:18:58.005 "trsvcid": "55792" 00:18:58.005 }, 00:18:58.005 "auth": { 00:18:58.005 "state": "completed", 00:18:58.005 "digest": "sha512", 00:18:58.005 "dhgroup": "null" 00:18:58.005 } 00:18:58.005 } 00:18:58.005 ]' 00:18:58.005 12:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.005 12:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:58.005 12:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.262 12:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:58.262 12:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.262 12:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.262 12:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.262 
12:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.262 12:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MTM0MDY3ZDQ3NjEyNWQzYWZkMTVjNTk2NzEzMjRhMGXaG6FM: --dhchap-ctrl-secret DHHC-1:02:ZDY4ZDBkOWNjNmJmNDlkYjE2YzBlZDQ3ZTI1MjEwNjg0NWEyMTFiNDA2MjRlZTkxbMBjPA==: 00:18:58.828 12:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.828 12:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:58.828 12:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:58.828 12:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.828 12:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:58.828 12:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.828 12:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:58.828 12:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:59.086 12:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:18:59.086 12:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup 
key ckey qpairs 00:18:59.086 12:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:59.086 12:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:59.086 12:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:59.086 12:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.086 12:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.086 12:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:59.086 12:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.086 12:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:59.086 12:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.086 12:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.344 00:18:59.344 12:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.344 12:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.344 12:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:59.601 12:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.601 12:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.601 12:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:59.601 12:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.601 12:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:59.601 12:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.601 { 00:18:59.601 "cntlid": 101, 00:18:59.601 "qid": 0, 00:18:59.601 "state": "enabled", 00:18:59.601 "listen_address": { 00:18:59.601 "trtype": "TCP", 00:18:59.601 "adrfam": "IPv4", 00:18:59.601 "traddr": "10.0.0.2", 00:18:59.601 "trsvcid": "4420" 00:18:59.601 }, 00:18:59.601 "peer_address": { 00:18:59.601 "trtype": "TCP", 00:18:59.601 "adrfam": "IPv4", 00:18:59.601 "traddr": "10.0.0.1", 00:18:59.601 "trsvcid": "55820" 00:18:59.601 }, 00:18:59.601 "auth": { 00:18:59.601 "state": "completed", 00:18:59.601 "digest": "sha512", 00:18:59.601 "dhgroup": "null" 00:18:59.601 } 00:18:59.601 } 00:18:59.601 ]' 00:18:59.601 12:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.601 12:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:59.601 12:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.601 12:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:59.601 12:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.601 12:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.601 12:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.601 
12:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.859 12:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YTI3YmJjNGUxMzk0MzE0ZDNkNTk4OGYyMjUyNjUzMGI2MTllMTY2NGUwZWEyMDc1ga2svA==: --dhchap-ctrl-secret DHHC-1:01:NTNmZGViYWZiZGVhYjk3Zjc5YzIyYmIwYjNhOGY2YTFVhIJL: 00:19:00.423 12:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.423 12:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:00.423 12:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:00.423 12:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.423 12:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:00.423 12:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.423 12:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:00.423 12:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:00.681 12:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:00.681 12:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup 
key ckey qpairs 00:19:00.681 12:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:00.681 12:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:00.681 12:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:00.681 12:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.681 12:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:00.681 12:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:00.681 12:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.681 12:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:00.681 12:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:00.681 12:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:00.681 00:19:00.938 12:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.938 12:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.938 12:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.938 12:07:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.938 12:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.938 12:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:00.938 12:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.938 12:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:00.938 12:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.938 { 00:19:00.938 "cntlid": 103, 00:19:00.938 "qid": 0, 00:19:00.938 "state": "enabled", 00:19:00.938 "listen_address": { 00:19:00.938 "trtype": "TCP", 00:19:00.938 "adrfam": "IPv4", 00:19:00.938 "traddr": "10.0.0.2", 00:19:00.938 "trsvcid": "4420" 00:19:00.938 }, 00:19:00.938 "peer_address": { 00:19:00.938 "trtype": "TCP", 00:19:00.938 "adrfam": "IPv4", 00:19:00.938 "traddr": "10.0.0.1", 00:19:00.938 "trsvcid": "55850" 00:19:00.938 }, 00:19:00.938 "auth": { 00:19:00.938 "state": "completed", 00:19:00.938 "digest": "sha512", 00:19:00.938 "dhgroup": "null" 00:19:00.938 } 00:19:00.938 } 00:19:00.938 ]' 00:19:00.938 12:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.938 12:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:00.938 12:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.196 12:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:01.196 12:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.196 12:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.196 12:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.196 12:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.454 12:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YjBhZDZkZTIxYjFmMmY1NzYyNDIzMTE0MzYxYjFjODAyY2Y5ODkzMzkzNDg4NDI2ZWMwMzAyN2RmZWQ1NDMwYlNELfg=: 00:19:02.018 12:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.018 12:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:02.018 12:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.018 12:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.018 12:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.018 12:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:02.018 12:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.018 12:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:02.018 12:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:02.018 12:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:02.018 12:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup 
key ckey qpairs 00:19:02.018 12:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:02.018 12:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:02.018 12:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:02.018 12:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.018 12:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.018 12:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.018 12:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.018 12:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.018 12:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.019 12:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.275 00:19:02.275 12:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.275 12:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.275 12:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:19:02.533 12:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.533 12:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.533 12:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.533 12:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.533 12:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.533 12:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.533 { 00:19:02.533 "cntlid": 105, 00:19:02.533 "qid": 0, 00:19:02.533 "state": "enabled", 00:19:02.533 "listen_address": { 00:19:02.533 "trtype": "TCP", 00:19:02.533 "adrfam": "IPv4", 00:19:02.533 "traddr": "10.0.0.2", 00:19:02.533 "trsvcid": "4420" 00:19:02.533 }, 00:19:02.533 "peer_address": { 00:19:02.533 "trtype": "TCP", 00:19:02.533 "adrfam": "IPv4", 00:19:02.533 "traddr": "10.0.0.1", 00:19:02.533 "trsvcid": "55886" 00:19:02.533 }, 00:19:02.533 "auth": { 00:19:02.533 "state": "completed", 00:19:02.533 "digest": "sha512", 00:19:02.533 "dhgroup": "ffdhe2048" 00:19:02.533 } 00:19:02.533 } 00:19:02.533 ]' 00:19:02.533 12:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.533 12:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:02.533 12:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.533 12:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:02.533 12:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.533 12:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.533 12:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:19:02.533 12:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.791 12:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MTZhMmJlMjRiMjgwMDk0OGRkNWEwY2ZhNjg4OTAwOWIxYzg0NjM4Njc4MjdkOTkwDJuT6A==: --dhchap-ctrl-secret DHHC-1:03:YjIxNGYwODk5ZmFlNzliZDc1OGU0Y2IyYmZmN2I4MjViZmMwNDUwZDBkOWNiZDlmZGM0YjhiZmI1MzE3OGYxZnEWiY8=: 00:19:03.356 12:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.356 12:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:03.356 12:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:03.356 12:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.356 12:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:03.356 12:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.356 12:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:03.356 12:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:03.613 12:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:03.613 12:07:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.613 12:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:03.613 12:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:03.613 12:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:03.613 12:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.613 12:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.613 12:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:03.613 12:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.613 12:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:03.613 12:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.613 12:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.871 00:19:03.871 12:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.871 12:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.871 12:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.129 12:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.129 12:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.129 12:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:04.129 12:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.129 12:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:04.129 12:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:04.129 { 00:19:04.129 "cntlid": 107, 00:19:04.129 "qid": 0, 00:19:04.129 "state": "enabled", 00:19:04.129 "listen_address": { 00:19:04.129 "trtype": "TCP", 00:19:04.129 "adrfam": "IPv4", 00:19:04.129 "traddr": "10.0.0.2", 00:19:04.129 "trsvcid": "4420" 00:19:04.129 }, 00:19:04.129 "peer_address": { 00:19:04.129 "trtype": "TCP", 00:19:04.129 "adrfam": "IPv4", 00:19:04.129 "traddr": "10.0.0.1", 00:19:04.129 "trsvcid": "55918" 00:19:04.129 }, 00:19:04.129 "auth": { 00:19:04.129 "state": "completed", 00:19:04.129 "digest": "sha512", 00:19:04.129 "dhgroup": "ffdhe2048" 00:19:04.129 } 00:19:04.129 } 00:19:04.129 ]' 00:19:04.129 12:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:04.129 12:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:04.129 12:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:04.129 12:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:04.129 12:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:04.129 12:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.129 12:07:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.129 12:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.386 12:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MTM0MDY3ZDQ3NjEyNWQzYWZkMTVjNTk2NzEzMjRhMGXaG6FM: --dhchap-ctrl-secret DHHC-1:02:ZDY4ZDBkOWNjNmJmNDlkYjE2YzBlZDQ3ZTI1MjEwNjg0NWEyMTFiNDA2MjRlZTkxbMBjPA==: 00:19:04.953 12:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.953 12:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:04.953 12:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:04.953 12:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.953 12:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:04.953 12:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.953 12:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:04.953 12:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:04.953 12:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe2048 2 00:19:04.953 12:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.953 12:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:04.953 12:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:04.953 12:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:04.953 12:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.953 12:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.953 12:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:04.953 12:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.953 12:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:04.953 12:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.953 12:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.211 00:19:05.211 12:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.211 12:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.211 12:07:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.468 12:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.468 12:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.468 12:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:05.468 12:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.468 12:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:05.468 12:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.468 { 00:19:05.468 "cntlid": 109, 00:19:05.468 "qid": 0, 00:19:05.468 "state": "enabled", 00:19:05.468 "listen_address": { 00:19:05.468 "trtype": "TCP", 00:19:05.468 "adrfam": "IPv4", 00:19:05.468 "traddr": "10.0.0.2", 00:19:05.468 "trsvcid": "4420" 00:19:05.468 }, 00:19:05.468 "peer_address": { 00:19:05.468 "trtype": "TCP", 00:19:05.468 "adrfam": "IPv4", 00:19:05.468 "traddr": "10.0.0.1", 00:19:05.468 "trsvcid": "38926" 00:19:05.468 }, 00:19:05.468 "auth": { 00:19:05.468 "state": "completed", 00:19:05.468 "digest": "sha512", 00:19:05.468 "dhgroup": "ffdhe2048" 00:19:05.468 } 00:19:05.468 } 00:19:05.468 ]' 00:19:05.468 12:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.468 12:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:05.468 12:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.468 12:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:05.468 12:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.468 12:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:19:05.468 12:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.468 12:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.724 12:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YTI3YmJjNGUxMzk0MzE0ZDNkNTk4OGYyMjUyNjUzMGI2MTllMTY2NGUwZWEyMDc1ga2svA==: --dhchap-ctrl-secret DHHC-1:01:NTNmZGViYWZiZGVhYjk3Zjc5YzIyYmIwYjNhOGY2YTFVhIJL: 00:19:06.289 12:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.289 12:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:06.289 12:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:06.289 12:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.289 12:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:06.289 12:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.289 12:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:06.289 12:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:06.547 12:07:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:06.547 12:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.547 12:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:06.547 12:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:06.547 12:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:06.547 12:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.547 12:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:06.547 12:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:06.547 12:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.547 12:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:06.547 12:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:06.547 12:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:06.805 00:19:06.805 12:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.805 12:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.805 12:07:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.805 12:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.805 12:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.805 12:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:06.805 12:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.805 12:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:06.805 12:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:06.805 { 00:19:06.805 "cntlid": 111, 00:19:06.805 "qid": 0, 00:19:06.805 "state": "enabled", 00:19:06.805 "listen_address": { 00:19:06.805 "trtype": "TCP", 00:19:06.805 "adrfam": "IPv4", 00:19:06.805 "traddr": "10.0.0.2", 00:19:06.805 "trsvcid": "4420" 00:19:06.805 }, 00:19:06.805 "peer_address": { 00:19:06.805 "trtype": "TCP", 00:19:06.805 "adrfam": "IPv4", 00:19:06.805 "traddr": "10.0.0.1", 00:19:06.805 "trsvcid": "38960" 00:19:06.805 }, 00:19:06.805 "auth": { 00:19:06.805 "state": "completed", 00:19:06.805 "digest": "sha512", 00:19:06.805 "dhgroup": "ffdhe2048" 00:19:06.805 } 00:19:06.805 } 00:19:06.805 ]' 00:19:06.805 12:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.063 12:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:07.063 12:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.063 12:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:07.063 12:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.063 12:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
00:19:07.063 12:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.063 12:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.321 12:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YjBhZDZkZTIxYjFmMmY1NzYyNDIzMTE0MzYxYjFjODAyY2Y5ODkzMzkzNDg4NDI2ZWMwMzAyN2RmZWQ1NDMwYlNELfg=: 00:19:07.886 12:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.886 12:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:07.886 12:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:07.886 12:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.886 12:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:07.886 12:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:07.886 12:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.886 12:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:07.886 12:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:07.886 12:07:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:07.886 12:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.886 12:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:07.887 12:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:07.887 12:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:07.887 12:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.887 12:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.887 12:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:07.887 12:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.887 12:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:07.887 12:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.887 12:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.145 00:19:08.145 12:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.145 12:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:19:08.145 12:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.404 12:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.404 12:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.404 12:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:08.404 12:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.404 12:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:08.404 12:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.404 { 00:19:08.404 "cntlid": 113, 00:19:08.404 "qid": 0, 00:19:08.404 "state": "enabled", 00:19:08.404 "listen_address": { 00:19:08.404 "trtype": "TCP", 00:19:08.404 "adrfam": "IPv4", 00:19:08.404 "traddr": "10.0.0.2", 00:19:08.404 "trsvcid": "4420" 00:19:08.404 }, 00:19:08.404 "peer_address": { 00:19:08.404 "trtype": "TCP", 00:19:08.404 "adrfam": "IPv4", 00:19:08.404 "traddr": "10.0.0.1", 00:19:08.404 "trsvcid": "38992" 00:19:08.404 }, 00:19:08.404 "auth": { 00:19:08.404 "state": "completed", 00:19:08.404 "digest": "sha512", 00:19:08.404 "dhgroup": "ffdhe3072" 00:19:08.404 } 00:19:08.404 } 00:19:08.404 ]' 00:19:08.404 12:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.404 12:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:08.404 12:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.404 12:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:08.404 12:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.404 12:07:57 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.404 12:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.404 12:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.662 12:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MTZhMmJlMjRiMjgwMDk0OGRkNWEwY2ZhNjg4OTAwOWIxYzg0NjM4Njc4MjdkOTkwDJuT6A==: --dhchap-ctrl-secret DHHC-1:03:YjIxNGYwODk5ZmFlNzliZDc1OGU0Y2IyYmZmN2I4MjViZmMwNDUwZDBkOWNiZDlmZGM0YjhiZmI1MzE3OGYxZnEWiY8=: 00:19:09.227 12:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.227 12:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:09.227 12:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:09.227 12:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.227 12:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:09.227 12:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.227 12:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:09.227 12:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:09.486 12:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:09.486 12:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.486 12:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:09.486 12:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:09.486 12:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:09.486 12:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.486 12:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.486 12:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:09.486 12:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.486 12:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:09.486 12:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.486 12:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.486 00:19:09.744 12:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:19:09.744 12:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.744 12:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.744 12:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.744 12:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.744 12:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:09.744 12:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.744 12:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:09.744 12:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.744 { 00:19:09.744 "cntlid": 115, 00:19:09.744 "qid": 0, 00:19:09.744 "state": "enabled", 00:19:09.744 "listen_address": { 00:19:09.744 "trtype": "TCP", 00:19:09.744 "adrfam": "IPv4", 00:19:09.744 "traddr": "10.0.0.2", 00:19:09.744 "trsvcid": "4420" 00:19:09.744 }, 00:19:09.744 "peer_address": { 00:19:09.744 "trtype": "TCP", 00:19:09.744 "adrfam": "IPv4", 00:19:09.744 "traddr": "10.0.0.1", 00:19:09.744 "trsvcid": "39022" 00:19:09.744 }, 00:19:09.744 "auth": { 00:19:09.744 "state": "completed", 00:19:09.744 "digest": "sha512", 00:19:09.744 "dhgroup": "ffdhe3072" 00:19:09.744 } 00:19:09.744 } 00:19:09.744 ]' 00:19:09.744 12:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.744 12:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:09.744 12:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.001 12:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:10.001 12:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
jq -r '.[0].auth.state' 00:19:10.001 12:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.001 12:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.001 12:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.002 12:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MTM0MDY3ZDQ3NjEyNWQzYWZkMTVjNTk2NzEzMjRhMGXaG6FM: --dhchap-ctrl-secret DHHC-1:02:ZDY4ZDBkOWNjNmJmNDlkYjE2YzBlZDQ3ZTI1MjEwNjg0NWEyMTFiNDA2MjRlZTkxbMBjPA==: 00:19:10.566 12:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.566 12:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:10.566 12:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:10.566 12:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.566 12:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:10.566 12:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.566 12:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:10.566 12:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:10.825 12:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:10.825 12:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.825 12:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:10.825 12:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:10.825 12:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:10.825 12:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.825 12:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.825 12:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:10.825 12:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.825 12:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:10.825 12:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.825 12:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.170 00:19:11.171 12:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:19:11.171 12:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.171 12:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.171 12:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.171 12:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.171 12:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:11.171 12:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.171 12:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:11.171 12:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.171 { 00:19:11.171 "cntlid": 117, 00:19:11.171 "qid": 0, 00:19:11.171 "state": "enabled", 00:19:11.171 "listen_address": { 00:19:11.171 "trtype": "TCP", 00:19:11.171 "adrfam": "IPv4", 00:19:11.171 "traddr": "10.0.0.2", 00:19:11.171 "trsvcid": "4420" 00:19:11.171 }, 00:19:11.171 "peer_address": { 00:19:11.171 "trtype": "TCP", 00:19:11.171 "adrfam": "IPv4", 00:19:11.171 "traddr": "10.0.0.1", 00:19:11.171 "trsvcid": "39054" 00:19:11.171 }, 00:19:11.171 "auth": { 00:19:11.171 "state": "completed", 00:19:11.171 "digest": "sha512", 00:19:11.171 "dhgroup": "ffdhe3072" 00:19:11.171 } 00:19:11.171 } 00:19:11.171 ]' 00:19:11.171 12:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.429 12:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:11.429 12:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.429 12:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:11.429 12:08:00 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.429 12:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.429 12:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.429 12:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.686 12:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YTI3YmJjNGUxMzk0MzE0ZDNkNTk4OGYyMjUyNjUzMGI2MTllMTY2NGUwZWEyMDc1ga2svA==: --dhchap-ctrl-secret DHHC-1:01:NTNmZGViYWZiZGVhYjk3Zjc5YzIyYmIwYjNhOGY2YTFVhIJL: 00:19:12.251 12:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.251 12:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:12.251 12:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:12.251 12:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.251 12:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:12.251 12:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.251 12:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:12.251 12:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:12.251 12:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:12.251 12:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.251 12:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:12.251 12:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:12.251 12:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:12.251 12:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.251 12:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:12.251 12:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:12.251 12:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.251 12:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:12.251 12:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.251 12:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.509 00:19:12.509 12:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.509 12:08:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.509 12:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.775 12:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.775 12:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.775 12:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:12.775 12:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.775 12:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:12.775 12:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.775 { 00:19:12.775 "cntlid": 119, 00:19:12.775 "qid": 0, 00:19:12.775 "state": "enabled", 00:19:12.775 "listen_address": { 00:19:12.775 "trtype": "TCP", 00:19:12.775 "adrfam": "IPv4", 00:19:12.775 "traddr": "10.0.0.2", 00:19:12.775 "trsvcid": "4420" 00:19:12.775 }, 00:19:12.775 "peer_address": { 00:19:12.775 "trtype": "TCP", 00:19:12.775 "adrfam": "IPv4", 00:19:12.775 "traddr": "10.0.0.1", 00:19:12.775 "trsvcid": "39078" 00:19:12.775 }, 00:19:12.775 "auth": { 00:19:12.775 "state": "completed", 00:19:12.775 "digest": "sha512", 00:19:12.775 "dhgroup": "ffdhe3072" 00:19:12.775 } 00:19:12.775 } 00:19:12.775 ]' 00:19:12.775 12:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.775 12:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:12.775 12:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.775 12:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:12.775 12:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:19:12.775 12:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.775 12:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.775 12:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.034 12:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YjBhZDZkZTIxYjFmMmY1NzYyNDIzMTE0MzYxYjFjODAyY2Y5ODkzMzkzNDg4NDI2ZWMwMzAyN2RmZWQ1NDMwYlNELfg=: 00:19:13.599 12:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.599 12:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:13.599 12:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:13.599 12:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.599 12:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:13.599 12:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:13.599 12:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.599 12:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:13.599 12:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:13.599 12:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:13.599 12:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.599 12:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:13.599 12:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:13.599 12:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:13.599 12:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.599 12:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.599 12:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:13.599 12:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.599 12:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:13.599 12:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.857 12:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.857 
00:19:14.115 12:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.115 12:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.115 12:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.115 12:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.115 12:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.115 12:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:14.115 12:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.115 12:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:14.115 12:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.115 { 00:19:14.115 "cntlid": 121, 00:19:14.115 "qid": 0, 00:19:14.115 "state": "enabled", 00:19:14.115 "listen_address": { 00:19:14.115 "trtype": "TCP", 00:19:14.115 "adrfam": "IPv4", 00:19:14.115 "traddr": "10.0.0.2", 00:19:14.115 "trsvcid": "4420" 00:19:14.115 }, 00:19:14.115 "peer_address": { 00:19:14.115 "trtype": "TCP", 00:19:14.115 "adrfam": "IPv4", 00:19:14.115 "traddr": "10.0.0.1", 00:19:14.115 "trsvcid": "39094" 00:19:14.115 }, 00:19:14.115 "auth": { 00:19:14.115 "state": "completed", 00:19:14.115 "digest": "sha512", 00:19:14.115 "dhgroup": "ffdhe4096" 00:19:14.115 } 00:19:14.115 } 00:19:14.115 ]' 00:19:14.115 12:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.115 12:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.115 12:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.373 12:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:14.373 12:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.373 12:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.373 12:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.373 12:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.373 12:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MTZhMmJlMjRiMjgwMDk0OGRkNWEwY2ZhNjg4OTAwOWIxYzg0NjM4Njc4MjdkOTkwDJuT6A==: --dhchap-ctrl-secret DHHC-1:03:YjIxNGYwODk5ZmFlNzliZDc1OGU0Y2IyYmZmN2I4MjViZmMwNDUwZDBkOWNiZDlmZGM0YjhiZmI1MzE3OGYxZnEWiY8=: 00:19:14.938 12:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.938 12:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:14.938 12:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:14.938 12:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.938 12:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:14.938 12:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.938 12:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:14.938 
12:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:15.196 12:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:19:15.196 12:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.196 12:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:15.196 12:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:15.196 12:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:15.196 12:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.196 12:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.196 12:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:15.196 12:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.196 12:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:15.196 12:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.196 12:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.454 00:19:15.454 12:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.454 12:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.454 12:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.712 12:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.712 12:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.712 12:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:15.712 12:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.712 12:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:15.712 12:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.712 { 00:19:15.712 "cntlid": 123, 00:19:15.712 "qid": 0, 00:19:15.712 "state": "enabled", 00:19:15.712 "listen_address": { 00:19:15.712 "trtype": "TCP", 00:19:15.712 "adrfam": "IPv4", 00:19:15.712 "traddr": "10.0.0.2", 00:19:15.712 "trsvcid": "4420" 00:19:15.712 }, 00:19:15.712 "peer_address": { 00:19:15.712 "trtype": "TCP", 00:19:15.712 "adrfam": "IPv4", 00:19:15.712 "traddr": "10.0.0.1", 00:19:15.712 "trsvcid": "48130" 00:19:15.712 }, 00:19:15.712 "auth": { 00:19:15.712 "state": "completed", 00:19:15.712 "digest": "sha512", 00:19:15.712 "dhgroup": "ffdhe4096" 00:19:15.712 } 00:19:15.712 } 00:19:15.712 ]' 00:19:15.712 12:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.712 12:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:15.712 12:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:19:15.712 12:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:15.712 12:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.712 12:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.712 12:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.712 12:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.970 12:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MTM0MDY3ZDQ3NjEyNWQzYWZkMTVjNTk2NzEzMjRhMGXaG6FM: --dhchap-ctrl-secret DHHC-1:02:ZDY4ZDBkOWNjNmJmNDlkYjE2YzBlZDQ3ZTI1MjEwNjg0NWEyMTFiNDA2MjRlZTkxbMBjPA==: 00:19:16.535 12:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.535 12:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:16.535 12:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:16.535 12:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.535 12:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:16.535 12:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.535 12:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:16.535 12:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:16.793 12:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:16.793 12:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.793 12:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:16.793 12:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:16.793 12:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:16.793 12:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.793 12:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.793 12:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:16.793 12:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.793 12:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:16.793 12:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.793 12:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.051 00:19:17.051 12:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.051 12:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.051 12:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.051 12:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.051 12:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.051 12:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:17.051 12:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.309 12:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:17.309 12:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.309 { 00:19:17.309 "cntlid": 125, 00:19:17.309 "qid": 0, 00:19:17.309 "state": "enabled", 00:19:17.309 "listen_address": { 00:19:17.309 "trtype": "TCP", 00:19:17.309 "adrfam": "IPv4", 00:19:17.309 "traddr": "10.0.0.2", 00:19:17.309 "trsvcid": "4420" 00:19:17.309 }, 00:19:17.309 "peer_address": { 00:19:17.309 "trtype": "TCP", 00:19:17.309 "adrfam": "IPv4", 00:19:17.309 "traddr": "10.0.0.1", 00:19:17.309 "trsvcid": "48162" 00:19:17.309 }, 00:19:17.309 "auth": { 00:19:17.309 "state": "completed", 00:19:17.309 "digest": "sha512", 00:19:17.309 "dhgroup": "ffdhe4096" 00:19:17.309 } 00:19:17.309 } 00:19:17.309 ]' 00:19:17.309 12:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.309 12:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:17.309 12:08:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.309 12:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:17.309 12:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.309 12:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.309 12:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.309 12:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.567 12:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YTI3YmJjNGUxMzk0MzE0ZDNkNTk4OGYyMjUyNjUzMGI2MTllMTY2NGUwZWEyMDc1ga2svA==: --dhchap-ctrl-secret DHHC-1:01:NTNmZGViYWZiZGVhYjk3Zjc5YzIyYmIwYjNhOGY2YTFVhIJL: 00:19:18.134 12:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.134 12:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:18.134 12:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:18.134 12:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.134 12:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:18.134 12:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.134 12:08:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:18.134 12:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:18.134 12:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:18.134 12:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.134 12:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:18.134 12:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:18.134 12:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:18.134 12:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.134 12:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:18.134 12:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:18.134 12:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.134 12:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:18.134 12:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:18.134 12:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:18.392 00:19:18.392 12:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.392 12:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.392 12:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.650 12:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.650 12:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.650 12:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:18.650 12:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.650 12:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:18.650 12:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.650 { 00:19:18.650 "cntlid": 127, 00:19:18.650 "qid": 0, 00:19:18.650 "state": "enabled", 00:19:18.650 "listen_address": { 00:19:18.650 "trtype": "TCP", 00:19:18.650 "adrfam": "IPv4", 00:19:18.650 "traddr": "10.0.0.2", 00:19:18.650 "trsvcid": "4420" 00:19:18.650 }, 00:19:18.650 "peer_address": { 00:19:18.650 "trtype": "TCP", 00:19:18.650 "adrfam": "IPv4", 00:19:18.650 "traddr": "10.0.0.1", 00:19:18.650 "trsvcid": "48190" 00:19:18.650 }, 00:19:18.650 "auth": { 00:19:18.650 "state": "completed", 00:19:18.650 "digest": "sha512", 00:19:18.650 "dhgroup": "ffdhe4096" 00:19:18.650 } 00:19:18.650 } 00:19:18.650 ]' 00:19:18.650 12:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.650 12:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:18.650 12:08:08 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.650 12:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:18.650 12:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.908 12:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.908 12:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.908 12:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.908 12:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YjBhZDZkZTIxYjFmMmY1NzYyNDIzMTE0MzYxYjFjODAyY2Y5ODkzMzkzNDg4NDI2ZWMwMzAyN2RmZWQ1NDMwYlNELfg=: 00:19:19.474 12:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.474 12:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:19.474 12:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:19.474 12:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.474 12:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:19.474 12:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:19.474 12:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.474 12:08:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:19.474 12:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:19.731 12:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:19:19.731 12:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.731 12:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:19.731 12:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:19.731 12:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:19.731 12:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.731 12:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.731 12:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:19.731 12:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.731 12:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:19.731 12:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.731 12:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.988 00:19:19.988 12:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:19.988 12:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:19.988 12:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.246 12:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.246 12:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.246 12:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:20.246 12:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.246 12:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:20.246 12:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.246 { 00:19:20.246 "cntlid": 129, 00:19:20.246 "qid": 0, 00:19:20.246 "state": "enabled", 00:19:20.246 "listen_address": { 00:19:20.246 "trtype": "TCP", 00:19:20.246 "adrfam": "IPv4", 00:19:20.246 "traddr": "10.0.0.2", 00:19:20.246 "trsvcid": "4420" 00:19:20.246 }, 00:19:20.246 "peer_address": { 00:19:20.246 "trtype": "TCP", 00:19:20.246 "adrfam": "IPv4", 00:19:20.246 "traddr": "10.0.0.1", 00:19:20.246 "trsvcid": "48222" 00:19:20.246 }, 00:19:20.246 "auth": { 00:19:20.246 "state": "completed", 00:19:20.246 "digest": "sha512", 00:19:20.246 "dhgroup": "ffdhe6144" 00:19:20.246 } 00:19:20.246 } 00:19:20.246 ]' 00:19:20.246 12:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.246 12:08:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:20.246 12:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.246 12:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:20.246 12:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.246 12:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.246 12:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.246 12:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.503 12:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MTZhMmJlMjRiMjgwMDk0OGRkNWEwY2ZhNjg4OTAwOWIxYzg0NjM4Njc4MjdkOTkwDJuT6A==: --dhchap-ctrl-secret DHHC-1:03:YjIxNGYwODk5ZmFlNzliZDc1OGU0Y2IyYmZmN2I4MjViZmMwNDUwZDBkOWNiZDlmZGM0YjhiZmI1MzE3OGYxZnEWiY8=: 00:19:21.067 12:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.067 12:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:21.067 12:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:21.067 12:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.067 12:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:21.067 12:08:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.067 12:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:21.067 12:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:21.324 12:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:21.324 12:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.324 12:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:21.324 12:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:21.324 12:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:21.324 12:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.324 12:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.324 12:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:21.324 12:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.324 12:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:21.324 12:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.324 12:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.581 00:19:21.581 12:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.581 12:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.581 12:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.838 12:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.838 12:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.838 12:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:21.838 12:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.838 12:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:21.838 12:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.838 { 00:19:21.838 "cntlid": 131, 00:19:21.838 "qid": 0, 00:19:21.838 "state": "enabled", 00:19:21.838 "listen_address": { 00:19:21.838 "trtype": "TCP", 00:19:21.838 "adrfam": "IPv4", 00:19:21.838 "traddr": "10.0.0.2", 00:19:21.838 "trsvcid": "4420" 00:19:21.838 }, 00:19:21.838 "peer_address": { 00:19:21.838 "trtype": "TCP", 00:19:21.838 "adrfam": "IPv4", 00:19:21.838 "traddr": "10.0.0.1", 00:19:21.838 "trsvcid": "48258" 00:19:21.838 }, 00:19:21.838 "auth": { 00:19:21.838 "state": "completed", 00:19:21.838 "digest": "sha512", 00:19:21.838 "dhgroup": "ffdhe6144" 00:19:21.838 } 00:19:21.838 } 00:19:21.838 ]' 00:19:21.838 12:08:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.838 12:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:21.838 12:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.838 12:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:21.839 12:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.839 12:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.839 12:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.839 12:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.095 12:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MTM0MDY3ZDQ3NjEyNWQzYWZkMTVjNTk2NzEzMjRhMGXaG6FM: --dhchap-ctrl-secret DHHC-1:02:ZDY4ZDBkOWNjNmJmNDlkYjE2YzBlZDQ3ZTI1MjEwNjg0NWEyMTFiNDA2MjRlZTkxbMBjPA==: 00:19:22.660 12:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.660 12:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:22.660 12:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:22.660 12:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.660 12:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- 
# [[ 0 == 0 ]] 00:19:22.660 12:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.660 12:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:22.660 12:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:22.660 12:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:22.660 12:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.660 12:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:22.660 12:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:22.660 12:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:22.660 12:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.660 12:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.660 12:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:22.660 12:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.916 12:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:22.916 12:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.916 12:08:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.173 00:19:23.173 12:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.173 12:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.173 12:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.173 12:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.173 12:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.173 12:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:23.173 12:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.430 12:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:23.430 12:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.430 { 00:19:23.430 "cntlid": 133, 00:19:23.430 "qid": 0, 00:19:23.430 "state": "enabled", 00:19:23.430 "listen_address": { 00:19:23.430 "trtype": "TCP", 00:19:23.430 "adrfam": "IPv4", 00:19:23.430 "traddr": "10.0.0.2", 00:19:23.430 "trsvcid": "4420" 00:19:23.430 }, 00:19:23.430 "peer_address": { 00:19:23.430 "trtype": "TCP", 00:19:23.430 "adrfam": "IPv4", 00:19:23.430 "traddr": "10.0.0.1", 00:19:23.430 "trsvcid": "48270" 00:19:23.430 }, 00:19:23.430 "auth": { 00:19:23.430 "state": "completed", 00:19:23.430 "digest": "sha512", 00:19:23.430 "dhgroup": "ffdhe6144" 00:19:23.430 } 00:19:23.430 } 00:19:23.430 ]' 
00:19:23.430 12:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.430 12:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:23.430 12:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.430 12:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:23.430 12:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.430 12:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.430 12:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.430 12:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.688 12:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YTI3YmJjNGUxMzk0MzE0ZDNkNTk4OGYyMjUyNjUzMGI2MTllMTY2NGUwZWEyMDc1ga2svA==: --dhchap-ctrl-secret DHHC-1:01:NTNmZGViYWZiZGVhYjk3Zjc5YzIyYmIwYjNhOGY2YTFVhIJL: 00:19:24.254 12:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.254 12:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:24.254 12:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:24.254 12:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.254 12:08:13 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:24.254 12:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.254 12:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:24.254 12:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:24.254 12:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:24.254 12:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.254 12:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:24.254 12:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:24.254 12:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:24.254 12:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.254 12:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:24.254 12:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:24.254 12:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.254 12:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:24.254 12:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:24.254 12:08:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:24.820 00:19:24.820 12:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.820 12:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.820 12:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.820 12:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.820 12:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.820 12:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:24.820 12:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.820 12:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:24.820 12:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.820 { 00:19:24.820 "cntlid": 135, 00:19:24.820 "qid": 0, 00:19:24.820 "state": "enabled", 00:19:24.820 "listen_address": { 00:19:24.820 "trtype": "TCP", 00:19:24.820 "adrfam": "IPv4", 00:19:24.820 "traddr": "10.0.0.2", 00:19:24.820 "trsvcid": "4420" 00:19:24.820 }, 00:19:24.820 "peer_address": { 00:19:24.820 "trtype": "TCP", 00:19:24.820 "adrfam": "IPv4", 00:19:24.820 "traddr": "10.0.0.1", 00:19:24.820 "trsvcid": "41550" 00:19:24.820 }, 00:19:24.820 "auth": { 00:19:24.820 "state": "completed", 00:19:24.820 "digest": "sha512", 00:19:24.820 "dhgroup": "ffdhe6144" 00:19:24.820 } 00:19:24.820 } 00:19:24.820 ]' 00:19:24.820 12:08:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.820 12:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:24.820 12:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.078 12:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:25.078 12:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.078 12:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.078 12:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.078 12:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.078 12:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YjBhZDZkZTIxYjFmMmY1NzYyNDIzMTE0MzYxYjFjODAyY2Y5ODkzMzkzNDg4NDI2ZWMwMzAyN2RmZWQ1NDMwYlNELfg=: 00:19:25.643 12:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.643 12:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:25.643 12:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:25.643 12:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.643 12:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:25.643 
12:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:25.643 12:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.643 12:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:25.643 12:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:25.902 12:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:25.902 12:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.902 12:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:25.902 12:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:25.902 12:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:25.902 12:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.902 12:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.902 12:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:25.902 12:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.902 12:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:25.902 12:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.902 12:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.467 00:19:26.467 12:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.467 12:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.467 12:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.724 12:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.724 12:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.724 12:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:26.724 12:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.724 12:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:26.724 12:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.724 { 00:19:26.724 "cntlid": 137, 00:19:26.724 "qid": 0, 00:19:26.724 "state": "enabled", 00:19:26.724 "listen_address": { 00:19:26.724 "trtype": "TCP", 00:19:26.724 "adrfam": "IPv4", 00:19:26.724 "traddr": "10.0.0.2", 00:19:26.724 "trsvcid": "4420" 00:19:26.724 }, 00:19:26.724 "peer_address": { 00:19:26.724 "trtype": "TCP", 00:19:26.724 "adrfam": "IPv4", 00:19:26.724 "traddr": "10.0.0.1", 00:19:26.725 "trsvcid": "41574" 00:19:26.725 }, 00:19:26.725 "auth": { 00:19:26.725 "state": "completed", 00:19:26.725 "digest": "sha512", 00:19:26.725 "dhgroup": 
"ffdhe8192" 00:19:26.725 } 00:19:26.725 } 00:19:26.725 ]' 00:19:26.725 12:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.725 12:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.725 12:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.725 12:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:26.725 12:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.725 12:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.725 12:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.725 12:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.981 12:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MTZhMmJlMjRiMjgwMDk0OGRkNWEwY2ZhNjg4OTAwOWIxYzg0NjM4Njc4MjdkOTkwDJuT6A==: --dhchap-ctrl-secret DHHC-1:03:YjIxNGYwODk5ZmFlNzliZDc1OGU0Y2IyYmZmN2I4MjViZmMwNDUwZDBkOWNiZDlmZGM0YjhiZmI1MzE3OGYxZnEWiY8=: 00:19:27.547 12:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.547 12:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:27.547 12:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:27.547 12:08:16 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.547 12:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:27.547 12:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.547 12:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:27.547 12:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:27.805 12:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:27.805 12:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.805 12:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:27.805 12:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:27.805 12:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:27.805 12:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.805 12:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.805 12:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:27.805 12:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.805 12:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:27.805 12:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.805 12:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.062 00:19:28.062 12:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.062 12:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.062 12:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.320 12:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.320 12:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.320 12:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:28.320 12:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.320 12:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:28.320 12:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.320 { 00:19:28.320 "cntlid": 139, 00:19:28.320 "qid": 0, 00:19:28.320 "state": "enabled", 00:19:28.320 "listen_address": { 00:19:28.320 "trtype": "TCP", 00:19:28.320 "adrfam": "IPv4", 00:19:28.320 "traddr": "10.0.0.2", 00:19:28.320 "trsvcid": "4420" 00:19:28.320 }, 00:19:28.320 "peer_address": { 00:19:28.320 "trtype": "TCP", 00:19:28.320 "adrfam": "IPv4", 00:19:28.320 "traddr": "10.0.0.1", 00:19:28.320 "trsvcid": "41594" 00:19:28.320 }, 00:19:28.320 
"auth": { 00:19:28.320 "state": "completed", 00:19:28.320 "digest": "sha512", 00:19:28.320 "dhgroup": "ffdhe8192" 00:19:28.320 } 00:19:28.320 } 00:19:28.320 ]' 00:19:28.320 12:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.320 12:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:28.320 12:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.320 12:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:28.320 12:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.320 12:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.320 12:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.320 12:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.578 12:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MTM0MDY3ZDQ3NjEyNWQzYWZkMTVjNTk2NzEzMjRhMGXaG6FM: --dhchap-ctrl-secret DHHC-1:02:ZDY4ZDBkOWNjNmJmNDlkYjE2YzBlZDQ3ZTI1MjEwNjg0NWEyMTFiNDA2MjRlZTkxbMBjPA==: 00:19:29.143 12:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.143 12:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:29.143 12:08:18 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:19:29.143 12:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.143 12:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:29.143 12:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:29.143 12:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:29.143 12:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:29.401 12:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:29.401 12:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:29.401 12:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:29.401 12:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:29.401 12:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:29.401 12:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.401 12:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.401 12:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:29.401 12:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.401 12:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:29.401 12:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.401 12:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.659 00:19:29.916 12:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.916 12:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.916 12:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.916 12:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.916 12:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.916 12:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:29.916 12:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.916 12:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:29.916 12:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.916 { 00:19:29.916 "cntlid": 141, 00:19:29.916 "qid": 0, 00:19:29.916 "state": "enabled", 00:19:29.916 "listen_address": { 00:19:29.916 "trtype": "TCP", 00:19:29.916 "adrfam": "IPv4", 00:19:29.916 "traddr": "10.0.0.2", 00:19:29.916 "trsvcid": "4420" 00:19:29.916 }, 00:19:29.916 "peer_address": { 00:19:29.916 "trtype": "TCP", 00:19:29.916 "adrfam": "IPv4", 00:19:29.916 "traddr": "10.0.0.1", 00:19:29.916 "trsvcid": 
"41610" 00:19:29.916 }, 00:19:29.916 "auth": { 00:19:29.916 "state": "completed", 00:19:29.916 "digest": "sha512", 00:19:29.916 "dhgroup": "ffdhe8192" 00:19:29.916 } 00:19:29.916 } 00:19:29.916 ]' 00:19:29.916 12:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.916 12:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:29.917 12:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.174 12:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:30.174 12:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.174 12:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.174 12:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.174 12:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.174 12:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YTI3YmJjNGUxMzk0MzE0ZDNkNTk4OGYyMjUyNjUzMGI2MTllMTY2NGUwZWEyMDc1ga2svA==: --dhchap-ctrl-secret DHHC-1:01:NTNmZGViYWZiZGVhYjk3Zjc5YzIyYmIwYjNhOGY2YTFVhIJL: 00:19:30.740 12:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.740 12:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:30.740 12:08:20 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:30.740 12:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.740 12:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:30.740 12:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.740 12:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:30.740 12:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:30.998 12:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:19:30.998 12:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.998 12:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:30.998 12:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:30.998 12:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:30.998 12:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.998 12:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:30.998 12:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:30.998 12:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.998 12:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:30.998 12:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.998 12:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:31.565 00:19:31.565 12:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.565 12:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.565 12:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.565 12:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.565 12:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.565 12:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:31.565 12:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.565 12:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:31.565 12:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.565 { 00:19:31.565 "cntlid": 143, 00:19:31.565 "qid": 0, 00:19:31.565 "state": "enabled", 00:19:31.565 "listen_address": { 00:19:31.565 "trtype": "TCP", 00:19:31.565 "adrfam": "IPv4", 00:19:31.565 "traddr": "10.0.0.2", 00:19:31.565 "trsvcid": "4420" 00:19:31.565 }, 00:19:31.565 "peer_address": { 00:19:31.565 "trtype": "TCP", 00:19:31.565 "adrfam": "IPv4", 00:19:31.565 "traddr": "10.0.0.1", 00:19:31.565 "trsvcid": "41648" 00:19:31.565 }, 00:19:31.565 "auth": { 
00:19:31.565 "state": "completed", 00:19:31.565 "digest": "sha512", 00:19:31.565 "dhgroup": "ffdhe8192" 00:19:31.565 } 00:19:31.565 } 00:19:31.565 ]' 00:19:31.565 12:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.565 12:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:31.565 12:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.822 12:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:31.822 12:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.822 12:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.822 12:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.822 12:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.822 12:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YjBhZDZkZTIxYjFmMmY1NzYyNDIzMTE0MzYxYjFjODAyY2Y5ODkzMzkzNDg4NDI2ZWMwMzAyN2RmZWQ1NDMwYlNELfg=: 00:19:32.390 12:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.390 12:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:32.390 12:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:32.390 12:08:21 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.390 12:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:32.390 12:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:32.390 12:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:32.390 12:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:32.390 12:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:32.390 12:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:32.390 12:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:32.679 12:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:32.679 12:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.679 12:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:32.679 12:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:32.679 12:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:32.680 12:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.680 12:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.680 12:08:22 nvmf_tcp.nvmf_auth_target -- 
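The `IFS=,` / `printf %s` pairs above are the bash idiom `auth.sh` uses to join its digest and dhgroup arrays into the comma-separated values that `bdev_nvme_set_options --dhchap-digests` and `--dhchap-dhgroups` expect. The equivalent join, sketched in Python for clarity:

```python
# Same lists as in the log above; the RPC takes them comma-separated.
digests = ["sha256", "sha384", "sha512"]
dhgroups = ["null", "ffdhe2048", "ffdhe3072", "ffdhe4096",
            "ffdhe6144", "ffdhe8192"]

print(",".join(digests))   # → sha256,sha384,sha512
print(",".join(dhgroups))  # → null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
```

Passing the full lists here (rather than the single `sha512`/`ffdhe8192` used in the earlier loop iterations) lets the subsequent `connect_authenticate` calls negotiate any combination the target offers.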
common/autotest_common.sh@560 -- # xtrace_disable 00:19:32.680 12:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.680 12:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:32.680 12:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.680 12:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.269 00:19:33.269 12:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.269 12:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.269 12:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.269 12:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.269 12:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.269 12:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:33.269 12:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.269 12:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:33.269 12:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.269 { 00:19:33.269 "cntlid": 145, 00:19:33.269 "qid": 0, 
00:19:33.269 "state": "enabled", 00:19:33.269 "listen_address": { 00:19:33.269 "trtype": "TCP", 00:19:33.269 "adrfam": "IPv4", 00:19:33.269 "traddr": "10.0.0.2", 00:19:33.269 "trsvcid": "4420" 00:19:33.269 }, 00:19:33.269 "peer_address": { 00:19:33.269 "trtype": "TCP", 00:19:33.269 "adrfam": "IPv4", 00:19:33.269 "traddr": "10.0.0.1", 00:19:33.269 "trsvcid": "41664" 00:19:33.269 }, 00:19:33.269 "auth": { 00:19:33.269 "state": "completed", 00:19:33.269 "digest": "sha512", 00:19:33.269 "dhgroup": "ffdhe8192" 00:19:33.269 } 00:19:33.269 } 00:19:33.269 ]' 00:19:33.269 12:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.269 12:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:33.269 12:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.269 12:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:33.269 12:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.527 12:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.527 12:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.527 12:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.527 12:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MTZhMmJlMjRiMjgwMDk0OGRkNWEwY2ZhNjg4OTAwOWIxYzg0NjM4Njc4MjdkOTkwDJuT6A==: --dhchap-ctrl-secret DHHC-1:03:YjIxNGYwODk5ZmFlNzliZDc1OGU0Y2IyYmZmN2I4MjViZmMwNDUwZDBkOWNiZDlmZGM0YjhiZmI1MzE3OGYxZnEWiY8=: 00:19:34.093 12:08:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.093 12:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:34.093 12:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:34.093 12:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.093 12:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:34.093 12:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:19:34.093 12:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:34.093 12:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.093 12:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:34.093 12:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:34.093 12:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:19:34.093 12:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:34.093 12:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:19:34.093 12:08:23 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:34.093 12:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:19:34.093 12:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:34.093 12:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:34.093 12:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:34.658 request: 00:19:34.658 { 00:19:34.658 "name": "nvme0", 00:19:34.658 "trtype": "tcp", 00:19:34.658 "traddr": "10.0.0.2", 00:19:34.658 "adrfam": "ipv4", 00:19:34.658 "trsvcid": "4420", 00:19:34.658 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:34.658 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:19:34.658 "prchk_reftag": false, 00:19:34.658 "prchk_guard": false, 00:19:34.658 "hdgst": false, 00:19:34.658 "ddgst": false, 00:19:34.658 "dhchap_key": "key2", 00:19:34.658 "method": "bdev_nvme_attach_controller", 00:19:34.658 "req_id": 1 00:19:34.658 } 00:19:34.658 Got JSON-RPC error response 00:19:34.658 response: 00:19:34.658 { 00:19:34.658 "code": -5, 00:19:34.658 "message": "Input/output error" 00:19:34.658 } 00:19:34.658 12:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:19:34.658 12:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:19:34.658 12:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' 
]] 00:19:34.658 12:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:34.658 12:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:34.658 12:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:34.658 12:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.658 12:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:34.658 12:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.658 12:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:34.658 12:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.658 12:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:34.658 12:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:34.658 12:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:19:34.658 12:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:34.658 12:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:19:34.658 12:08:23 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:34.658 12:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:19:34.658 12:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:34.658 12:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:34.658 12:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:34.915 request: 00:19:34.915 { 00:19:34.915 "name": "nvme0", 00:19:34.915 "trtype": "tcp", 00:19:34.915 "traddr": "10.0.0.2", 00:19:34.915 "adrfam": "ipv4", 00:19:34.915 "trsvcid": "4420", 00:19:34.915 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:34.915 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:19:34.915 "prchk_reftag": false, 00:19:34.915 "prchk_guard": false, 00:19:34.915 "hdgst": false, 00:19:34.915 "ddgst": false, 00:19:34.915 "dhchap_key": "key1", 00:19:34.915 "dhchap_ctrlr_key": "ckey2", 00:19:34.915 "method": "bdev_nvme_attach_controller", 00:19:34.915 "req_id": 1 00:19:34.915 } 00:19:34.915 Got JSON-RPC error response 00:19:34.915 response: 00:19:34.915 { 00:19:34.915 "code": -5, 00:19:34.915 "message": "Input/output error" 00:19:34.915 } 00:19:34.915 12:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:19:34.915 12:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 
00:19:34.915 12:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:19:34.915 12:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:34.915 12:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:34.915 12:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:34.915 12:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.915 12:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:34.915 12:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:19:34.915 12:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:34.915 12:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.915 12:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:34.915 12:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.915 12:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:19:34.915 12:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.915 12:08:24 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@637 -- # local arg=hostrpc 00:19:34.915 12:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:34.915 12:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:19:34.915 12:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:34.915 12:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.915 12:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.480 request: 00:19:35.480 { 00:19:35.480 "name": "nvme0", 00:19:35.480 "trtype": "tcp", 00:19:35.480 "traddr": "10.0.0.2", 00:19:35.480 "adrfam": "ipv4", 00:19:35.480 "trsvcid": "4420", 00:19:35.480 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:35.480 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:19:35.480 "prchk_reftag": false, 00:19:35.480 "prchk_guard": false, 00:19:35.480 "hdgst": false, 00:19:35.480 "ddgst": false, 00:19:35.480 "dhchap_key": "key1", 00:19:35.480 "dhchap_ctrlr_key": "ckey1", 00:19:35.480 "method": "bdev_nvme_attach_controller", 00:19:35.480 "req_id": 1 00:19:35.480 } 00:19:35.480 Got JSON-RPC error response 00:19:35.480 response: 00:19:35.480 { 00:19:35.480 "code": -5, 00:19:35.480 "message": "Input/output error" 00:19:35.480 } 00:19:35.480 12:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:19:35.480 12:08:24 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:19:35.480 12:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:19:35.480 12:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:35.480 12:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:35.480 12:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:35.480 12:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.480 12:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:35.480 12:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2214526 00:19:35.480 12:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 2214526 ']' 00:19:35.480 12:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 2214526 00:19:35.480 12:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:19:35.480 12:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:35.480 12:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2214526 00:19:35.480 12:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:19:35.480 12:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:19:35.480 12:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2214526' 00:19:35.480 killing process with pid 2214526 00:19:35.480 12:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 2214526 00:19:35.480 12:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 2214526 00:19:35.738 
12:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:35.738 12:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:35.738 12:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:19:35.738 12:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.738 12:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:35.738 12:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2235223 00:19:35.738 12:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2235223 00:19:35.738 12:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 2235223 ']' 00:19:35.738 12:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.738 12:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:35.738 12:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:35.738 12:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:35.738 12:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.671 12:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:36.671 12:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:19:36.671 12:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:36.671 12:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:36.671 12:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.671 12:08:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.671 12:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:36.671 12:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2235223 00:19:36.671 12:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 2235223 ']' 00:19:36.671 12:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.671 12:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:36.671 12:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:36.671 12:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:36.671 12:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.671 12:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:36.671 12:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:19:36.671 12:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:19:36.671 12:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:36.671 12:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.929 12:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:36.929 12:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:19:36.929 12:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.929 12:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:36.929 12:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:36.929 12:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:36.929 12:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.929 12:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:36.929 12:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:36.929 12:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.929 12:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:36.930 12:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:36.930 12:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.187 00:19:37.187 12:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.187 12:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.187 12:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.445 12:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.445 12:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.445 12:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:37.445 12:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.445 12:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:37.445 12:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.445 { 00:19:37.445 "cntlid": 1, 00:19:37.445 "qid": 0, 00:19:37.445 "state": "enabled", 00:19:37.445 "listen_address": { 00:19:37.445 "trtype": "TCP", 00:19:37.445 "adrfam": "IPv4", 00:19:37.445 "traddr": "10.0.0.2", 00:19:37.445 "trsvcid": "4420" 00:19:37.445 }, 00:19:37.445 "peer_address": { 00:19:37.445 "trtype": "TCP", 00:19:37.445 "adrfam": "IPv4", 00:19:37.445 "traddr": "10.0.0.1", 00:19:37.445 "trsvcid": "59208" 00:19:37.445 }, 00:19:37.445 "auth": { 
00:19:37.445 "state": "completed", 00:19:37.445 "digest": "sha512", 00:19:37.445 "dhgroup": "ffdhe8192" 00:19:37.445 } 00:19:37.445 } 00:19:37.445 ]' 00:19:37.445 12:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.445 12:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:37.445 12:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.703 12:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:37.703 12:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.703 12:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.703 12:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.703 12:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.703 12:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YjBhZDZkZTIxYjFmMmY1NzYyNDIzMTE0MzYxYjFjODAyY2Y5ODkzMzkzNDg4NDI2ZWMwMzAyN2RmZWQ1NDMwYlNELfg=: 00:19:38.268 12:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.269 12:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:38.269 12:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:38.269 12:08:27 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.269 12:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:38.269 12:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:38.269 12:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:38.269 12:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.269 12:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:38.269 12:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:38.269 12:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:38.527 12:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:38.527 12:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:19:38.527 12:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:38.527 12:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:19:38.527 12:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:38.527 12:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # 
type -t hostrpc 00:19:38.527 12:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:38.527 12:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:38.527 12:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:38.785 request: 00:19:38.785 { 00:19:38.785 "name": "nvme0", 00:19:38.785 "trtype": "tcp", 00:19:38.785 "traddr": "10.0.0.2", 00:19:38.785 "adrfam": "ipv4", 00:19:38.785 "trsvcid": "4420", 00:19:38.785 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:38.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:19:38.785 "prchk_reftag": false, 00:19:38.785 "prchk_guard": false, 00:19:38.785 "hdgst": false, 00:19:38.785 "ddgst": false, 00:19:38.785 "dhchap_key": "key3", 00:19:38.785 "method": "bdev_nvme_attach_controller", 00:19:38.785 "req_id": 1 00:19:38.785 } 00:19:38.785 Got JSON-RPC error response 00:19:38.785 response: 00:19:38.785 { 00:19:38.785 "code": -5, 00:19:38.785 "message": "Input/output error" 00:19:38.785 } 00:19:38.785 12:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:19:38.785 12:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:19:38.785 12:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:19:38.785 12:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:38.785 12:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- 
# IFS=, 00:19:38.785 12:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:19:38.785 12:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:38.785 12:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:38.785 12:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:38.785 12:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:19:38.785 12:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:38.785 12:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:19:38.785 12:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:38.785 12:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:19:38.785 12:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:38.785 12:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:38.785 12:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:39.043 request: 00:19:39.043 { 00:19:39.043 "name": "nvme0", 00:19:39.043 "trtype": "tcp", 00:19:39.043 "traddr": "10.0.0.2", 00:19:39.043 "adrfam": "ipv4", 00:19:39.043 "trsvcid": "4420", 00:19:39.043 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:39.043 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:19:39.043 "prchk_reftag": false, 00:19:39.043 "prchk_guard": false, 00:19:39.043 "hdgst": false, 00:19:39.043 "ddgst": false, 00:19:39.043 "dhchap_key": "key3", 00:19:39.043 "method": "bdev_nvme_attach_controller", 00:19:39.043 "req_id": 1 00:19:39.043 } 00:19:39.043 Got JSON-RPC error response 00:19:39.043 response: 00:19:39.043 { 00:19:39.043 "code": -5, 00:19:39.043 "message": "Input/output error" 00:19:39.043 } 00:19:39.043 12:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:19:39.043 12:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:19:39.043 12:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:19:39.043 12:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:39.043 12:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:39.043 12:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:19:39.043 12:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:39.043 12:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:39.043 12:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 
--dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:39.043 12:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:39.301 12:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:39.301 12:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:39.301 12:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.301 12:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:39.301 12:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:39.301 12:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:39.301 12:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.301 12:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:39.301 12:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:39.301 12:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:19:39.301 12:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:39.301 12:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:19:39.301 12:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:39.301 12:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:19:39.301 12:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:39.301 12:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:39.301 12:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:39.301 request: 00:19:39.301 { 00:19:39.301 "name": "nvme0", 00:19:39.301 "trtype": "tcp", 00:19:39.301 "traddr": "10.0.0.2", 00:19:39.301 "adrfam": "ipv4", 00:19:39.301 "trsvcid": "4420", 00:19:39.301 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:39.301 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:19:39.301 "prchk_reftag": false, 00:19:39.301 "prchk_guard": false, 00:19:39.301 "hdgst": false, 00:19:39.301 "ddgst": false, 00:19:39.301 "dhchap_key": "key0", 00:19:39.301 "dhchap_ctrlr_key": "key1", 00:19:39.301 "method": "bdev_nvme_attach_controller", 00:19:39.301 "req_id": 1 00:19:39.301 } 00:19:39.301 Got JSON-RPC error response 00:19:39.301 response: 00:19:39.301 { 00:19:39.301 
"code": -5, 00:19:39.301 "message": "Input/output error" 00:19:39.301 } 00:19:39.558 12:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:19:39.558 12:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:19:39.558 12:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:19:39.558 12:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:39.558 12:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:39.558 12:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:39.558 00:19:39.558 12:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:19:39.558 12:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.558 12:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:19:39.815 12:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.815 12:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.815 12:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.072 12:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT 
SIGTERM EXIT 00:19:40.072 12:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:19:40.072 12:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2214701 00:19:40.072 12:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 2214701 ']' 00:19:40.072 12:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 2214701 00:19:40.072 12:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:19:40.072 12:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:40.072 12:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2214701 00:19:40.072 12:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:19:40.072 12:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:19:40.072 12:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2214701' 00:19:40.072 killing process with pid 2214701 00:19:40.072 12:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 2214701 00:19:40.072 12:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 2214701 00:19:40.329 12:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:40.329 12:08:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:40.329 12:08:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:19:40.329 12:08:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:40.329 12:08:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:19:40.329 12:08:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:40.329 12:08:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:40.329 rmmod nvme_tcp 00:19:40.329 rmmod nvme_fabrics 00:19:40.329 
rmmod nvme_keyring 00:19:40.329 12:08:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:40.329 12:08:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:19:40.329 12:08:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:19:40.329 12:08:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2235223 ']' 00:19:40.329 12:08:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2235223 00:19:40.329 12:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 2235223 ']' 00:19:40.329 12:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 2235223 00:19:40.329 12:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:19:40.329 12:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:40.329 12:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2235223 00:19:40.587 12:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:19:40.587 12:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:19:40.587 12:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2235223' 00:19:40.587 killing process with pid 2235223 00:19:40.587 12:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 2235223 00:19:40.587 12:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 2235223 00:19:40.587 12:08:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:40.587 12:08:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:40.587 12:08:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:40.587 12:08:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:19:40.587 12:08:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:40.587 12:08:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.587 12:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:40.587 12:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.114 12:08:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:43.115 12:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.lVL /tmp/spdk.key-sha256.rKn /tmp/spdk.key-sha384.z8u /tmp/spdk.key-sha512.Kbb /tmp/spdk.key-sha512.aCP /tmp/spdk.key-sha384.MUX /tmp/spdk.key-sha256.Ebw '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:43.115 00:19:43.115 real 2m10.187s 00:19:43.115 user 4m48.637s 00:19:43.115 sys 0m28.803s 00:19:43.115 12:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:43.115 12:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.115 ************************************ 00:19:43.115 END TEST nvmf_auth_target 00:19:43.115 ************************************ 00:19:43.115 12:08:32 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:19:43.115 12:08:32 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:43.115 12:08:32 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:19:43.115 12:08:32 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:43.115 12:08:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:43.115 ************************************ 00:19:43.115 START TEST nvmf_bdevio_no_huge 00:19:43.115 
************************************ 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:43.115 * Looking for test storage... 00:19:43.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:43.115 12:08:32 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:19:43.115 12:08:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:49.668 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:49.668 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.668 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:49.668 Found net devices under 0000:af:00.0: cvl_0_0 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:49.669 Found net devices under 0000:af:00.1: cvl_0_1 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:49.669 12:08:38 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:49.669 
12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:49.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:49.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:19:49.669 00:19:49.669 --- 10.0.0.2 ping statistics --- 00:19:49.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.669 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:49.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:49.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:19:49.669 00:19:49.669 --- 10.0.0.1 ping statistics --- 00:19:49.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.669 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:49.669 12:08:38 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@723 -- # xtrace_disable 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2239736 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2239736 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@830 -- # '[' -z 2239736 ']' 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:49.669 12:08:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:49.669 [2024-06-10 12:08:38.938872] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:19:49.669 [2024-06-10 12:08:38.938930] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:49.669 [2024-06-10 12:08:39.019387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:49.669 [2024-06-10 12:08:39.117120] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:49.669 [2024-06-10 12:08:39.117154] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:49.669 [2024-06-10 12:08:39.117163] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:49.669 [2024-06-10 12:08:39.117171] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:49.669 [2024-06-10 12:08:39.117179] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:49.669 [2024-06-10 12:08:39.117299] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:19:49.669 [2024-06-10 12:08:39.117423] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:19:49.669 [2024-06-10 12:08:39.117523] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:19:49.669 [2024-06-10 12:08:39.117524] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:19:50.232 12:08:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:50.232 12:08:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@863 -- # return 0 00:19:50.232 12:08:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:50.232 12:08:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:50.232 12:08:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:50.490 12:08:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.490 12:08:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:50.490 12:08:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:50.490 12:08:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:50.490 [2024-06-10 12:08:39.788996] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:50.490 12:08:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:50.490 12:08:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:50.490 12:08:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:50.490 12:08:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:50.490 Malloc0 00:19:50.490 12:08:39 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:50.490 12:08:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:50.490 12:08:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:50.490 12:08:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:50.490 12:08:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:50.490 12:08:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:50.490 12:08:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:50.490 12:08:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:50.490 12:08:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:50.490 12:08:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:50.490 12:08:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:50.490 12:08:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:50.490 [2024-06-10 12:08:39.833630] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:50.490 12:08:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:50.490 12:08:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:50.490 12:08:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:50.490 12:08:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:19:50.490 12:08:39 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:19:50.490 12:08:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.490 12:08:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.490 { 00:19:50.490 "params": { 00:19:50.490 "name": "Nvme$subsystem", 00:19:50.490 "trtype": "$TEST_TRANSPORT", 00:19:50.490 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.490 "adrfam": "ipv4", 00:19:50.490 "trsvcid": "$NVMF_PORT", 00:19:50.490 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.490 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.490 "hdgst": ${hdgst:-false}, 00:19:50.490 "ddgst": ${ddgst:-false} 00:19:50.490 }, 00:19:50.490 "method": "bdev_nvme_attach_controller" 00:19:50.490 } 00:19:50.490 EOF 00:19:50.490 )") 00:19:50.490 12:08:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:19:50.491 12:08:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:19:50.491 12:08:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:19:50.491 12:08:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:50.491 "params": { 00:19:50.491 "name": "Nvme1", 00:19:50.491 "trtype": "tcp", 00:19:50.491 "traddr": "10.0.0.2", 00:19:50.491 "adrfam": "ipv4", 00:19:50.491 "trsvcid": "4420", 00:19:50.491 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.491 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:50.491 "hdgst": false, 00:19:50.491 "ddgst": false 00:19:50.491 }, 00:19:50.491 "method": "bdev_nvme_attach_controller" 00:19:50.491 }' 00:19:50.491 [2024-06-10 12:08:39.886139] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:19:50.491 [2024-06-10 12:08:39.886191] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2239789 ] 00:19:50.491 [2024-06-10 12:08:39.960466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:50.748 [2024-06-10 12:08:40.063433] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.748 [2024-06-10 12:08:40.063531] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.748 [2024-06-10 12:08:40.063534] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.006 I/O targets: 00:19:51.006 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:51.006 00:19:51.006 00:19:51.006 CUnit - A unit testing framework for C - Version 2.1-3 00:19:51.006 http://cunit.sourceforge.net/ 00:19:51.006 00:19:51.006 00:19:51.006 Suite: bdevio tests on: Nvme1n1 00:19:51.006 Test: blockdev write read block ...passed 00:19:51.006 Test: blockdev write zeroes read block ...passed 00:19:51.006 Test: blockdev write zeroes read no split ...passed 00:19:51.006 Test: blockdev write zeroes read split ...passed 00:19:51.006 Test: blockdev write zeroes read split partial ...passed 00:19:51.006 Test: blockdev reset ...[2024-06-10 12:08:40.505723] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:51.006 [2024-06-10 12:08:40.505788] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21545a0 (9): Bad file descriptor 00:19:51.263 [2024-06-10 12:08:40.561702] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:51.263 passed 00:19:51.263 Test: blockdev write read 8 blocks ...passed 00:19:51.264 Test: blockdev write read size > 128k ...passed 00:19:51.264 Test: blockdev write read invalid size ...passed 00:19:51.264 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:51.264 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:51.264 Test: blockdev write read max offset ...passed 00:19:51.264 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:51.264 Test: blockdev writev readv 8 blocks ...passed 00:19:51.264 Test: blockdev writev readv 30 x 1block ...passed 00:19:51.264 Test: blockdev writev readv block ...passed 00:19:51.264 Test: blockdev writev readv size > 128k ...passed 00:19:51.264 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:51.264 Test: blockdev comparev and writev ...[2024-06-10 12:08:40.777369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:51.264 [2024-06-10 12:08:40.777397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:51.264 [2024-06-10 12:08:40.777413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:51.264 [2024-06-10 12:08:40.777424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:51.264 [2024-06-10 12:08:40.777669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:51.264 [2024-06-10 12:08:40.777682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:51.264 [2024-06-10 12:08:40.777699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:51.264 [2024-06-10 12:08:40.777709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:51.264 [2024-06-10 12:08:40.777943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:51.264 [2024-06-10 12:08:40.777954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:51.264 [2024-06-10 12:08:40.777967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:51.264 [2024-06-10 12:08:40.777977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:51.264 [2024-06-10 12:08:40.778222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:51.264 [2024-06-10 12:08:40.778234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:51.264 [2024-06-10 12:08:40.778247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:51.264 [2024-06-10 12:08:40.778256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:51.522 passed 00:19:51.522 Test: blockdev nvme passthru rw ...passed 00:19:51.522 Test: blockdev nvme passthru vendor specific ...[2024-06-10 12:08:40.860842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:51.522 [2024-06-10 12:08:40.860860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:51.522 [2024-06-10 12:08:40.860984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:51.522 [2024-06-10 12:08:40.860996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:51.522 [2024-06-10 12:08:40.861118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:51.522 [2024-06-10 12:08:40.861129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:51.522 [2024-06-10 12:08:40.861247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:51.522 [2024-06-10 12:08:40.861259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:51.522 passed 00:19:51.522 Test: blockdev nvme admin passthru ...passed 00:19:51.522 Test: blockdev copy ...passed 00:19:51.522 00:19:51.522 Run Summary: Type Total Ran Passed Failed Inactive 00:19:51.522 suites 1 1 n/a 0 0 00:19:51.522 tests 23 23 23 0 0 00:19:51.522 asserts 152 152 152 0 n/a 00:19:51.522 00:19:51.522 Elapsed time = 1.232 seconds 00:19:51.781 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:51.781 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.781 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:51.781 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.781 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:51.781 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@30 -- # nvmftestfini 00:19:51.781 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:51.781 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:19:51.781 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:51.781 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:19:51.781 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:51.781 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:51.781 rmmod nvme_tcp 00:19:51.781 rmmod nvme_fabrics 00:19:51.781 rmmod nvme_keyring 00:19:52.040 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:52.040 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:19:52.040 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:19:52.040 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2239736 ']' 00:19:52.040 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2239736 00:19:52.040 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@949 -- # '[' -z 2239736 ']' 00:19:52.040 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # kill -0 2239736 00:19:52.040 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # uname 00:19:52.040 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:52.040 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2239736 00:19:52.040 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:19:52.040 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:19:52.040 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@967 -- # echo 'killing process with pid 2239736' 00:19:52.040 killing process with pid 2239736 00:19:52.040 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # kill 2239736 00:19:52.040 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # wait 2239736 00:19:52.298 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:52.298 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:52.298 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:52.298 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:52.298 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:52.298 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.298 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:52.298 12:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.832 12:08:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:54.832 00:19:54.832 real 0m11.598s 00:19:54.832 user 0m14.199s 00:19:54.832 sys 0m6.158s 00:19:54.832 12:08:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:54.832 12:08:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:54.832 ************************************ 00:19:54.832 END TEST nvmf_bdevio_no_huge 00:19:54.832 ************************************ 00:19:54.832 12:08:43 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:54.832 12:08:43 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:19:54.832 12:08:43 nvmf_tcp -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:19:54.832 12:08:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:54.832 ************************************ 00:19:54.832 START TEST nvmf_tls 00:19:54.832 ************************************ 00:19:54.832 12:08:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:54.832 * Looking for test storage... 00:19:54.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:54.832 12:08:44 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:54.832 12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:54.832 12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:54.832 12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:54.832 12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:54.832 12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:54.832 12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:54.832 12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:54.832 12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:54.832 12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:54.832 12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:54.832 12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:54.832 12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:54.832 12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:19:54.832 12:08:44 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:54.832 12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:54.832 12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:54.832 12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:54.832 12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:54.832 12:08:44 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:54.832 12:08:44 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:54.832 12:08:44 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:54.832 12:08:44 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.832 12:08:44 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:19:54.832 12:08:44 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.832 12:08:44 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:54.833 12:08:44 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.833 12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:19:54.833 12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:54.833 12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:54.833 12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:54.833 12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:54.833 12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:54.833 12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:54.833 12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:54.833 
12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:54.833 12:08:44 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:54.833 12:08:44 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:19:54.833 12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:54.833 12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:54.833 12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:54.833 12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:54.833 12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:54.833 12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.833 12:08:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:54.833 12:08:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.833 12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:54.833 12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:54.833 12:08:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:19:54.833 12:08:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A 
pci_drivers 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:01.398 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:01.398 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:01.398 12:08:50 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:01.398 Found net devices under 0000:af:00.0: cvl_0_0 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:01.398 Found net devices under 0000:af:00.1: cvl_0_1 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:01.398 12:08:50 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:01.398 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:01.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:01.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:20:01.398 00:20:01.398 --- 10.0.0.2 ping statistics --- 00:20:01.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.398 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:20:01.399 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:01.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:01.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:20:01.657 00:20:01.657 --- 10.0.0.1 ping statistics --- 00:20:01.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.657 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:20:01.657 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:01.657 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:01.657 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:01.657 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:01.657 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:01.657 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:01.657 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:01.657 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:01.657 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # 
modprobe nvme-tcp 00:20:01.657 12:08:50 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:01.657 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:01.657 12:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:01.657 12:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.657 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2243749 00:20:01.657 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:01.657 12:08:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2243749 00:20:01.657 12:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2243749 ']' 00:20:01.657 12:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.657 12:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:01.657 12:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.657 12:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:01.657 12:08:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.657 [2024-06-10 12:08:51.013465] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:20:01.657 [2024-06-10 12:08:51.013515] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.657 EAL: No free 2048 kB hugepages reported on node 1 00:20:01.657 [2024-06-10 12:08:51.088822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.657 [2024-06-10 12:08:51.161401] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:01.657 [2024-06-10 12:08:51.161441] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:01.657 [2024-06-10 12:08:51.161451] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:01.658 [2024-06-10 12:08:51.161460] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:01.658 [2024-06-10 12:08:51.161467] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:01.658 [2024-06-10 12:08:51.161495] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:20:02.592 12:08:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:20:02.592 12:08:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0
00:20:02.592 12:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:20:02.592 12:08:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable
00:20:02.592 12:08:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:02.592 12:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:02.592 12:08:51 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']'
00:20:02.592 12:08:51 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl
00:20:02.592 true
00:20:02.592 12:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:20:02.592 12:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version
00:20:02.850 12:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0
00:20:02.850 12:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]]
00:20:02.850 12:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
00:20:02.850 12:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:20:02.850 12:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version
00:20:03.107 12:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13
00:20:03.107 12:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]]
00:20:03.107 12:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7
00:20:03.365 12:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:20:03.365 12:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version
00:20:03.365 12:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7
00:20:03.365 12:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]]
00:20:03.366 12:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:20:03.366 12:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls
00:20:03.625 12:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false
00:20:03.625 12:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]]
00:20:03.625 12:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls
00:20:03.933 12:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:20:03.933 12:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls
00:20:03.933 12:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true
00:20:03.933 12:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]]
00:20:03.933 12:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls
00:20:04.203 12:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:20:04.203 12:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls
00:20:04.461 12:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false
00:20:04.461 12:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]]
00:20:04.461 12:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1
00:20:04.461 12:08:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1
00:20:04.461 12:08:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest
00:20:04.461 12:08:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1
00:20:04.461 12:08:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff
00:20:04.461 12:08:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1
00:20:04.461 12:08:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python -
00:20:04.461 12:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:20:04.461 12:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1
00:20:04.461 12:08:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1
00:20:04.461 12:08:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest
00:20:04.461 12:08:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1
00:20:04.461 12:08:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100
00:20:04.461 12:08:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1
00:20:04.461 12:08:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python -
00:20:04.461 12:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
00:20:04.461 12:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp
00:20:04.461 12:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.MLTEnqm5E5
00:20:04.461 12:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp
00:20:04.461 12:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.pWYXcjeDBU
00:20:04.461 12:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:20:04.461 12:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
00:20:04.461 12:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.MLTEnqm5E5
00:20:04.461 12:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.pWYXcjeDBU
00:20:04.461 12:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
00:20:04.718 12:08:54 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init
00:20:04.718 12:08:54 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.MLTEnqm5E5
00:20:04.718 12:08:54 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.MLTEnqm5E5
00:20:04.718 12:08:54 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:20:04.975 [2024-06-10 12:08:54.394467] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:04.975 12:08:54 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:20:05.232 12:08:54 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:20:05.233 [2024-06-10 12:08:54.751374] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:20:05.233 [2024-06-10 12:08:54.751597] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:05.490 12:08:54 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:20:05.490 malloc0
00:20:05.490 12:08:54 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:20:05.747 12:08:55 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MLTEnqm5E5
00:20:05.747 [2024-06-10 12:08:55.253022] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09
00:20:06.004 12:08:55 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.MLTEnqm5E5
00:20:06.004 EAL: No free 2048 kB hugepages reported on node 1
00:20:15.964 Initializing NVMe Controllers
00:20:15.964 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:15.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:15.964 Initialization complete. Launching workers.
00:20:15.964 ========================================================
00:20:15.964 Latency(us)
00:20:15.964 Device Information : IOPS MiB/s Average min max
00:20:15.964 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16483.20 64.39 3883.15 795.27 6753.76
00:20:15.964 ========================================================
00:20:15.965 Total : 16483.20 64.39 3883.15 795.27 6753.76
00:20:15.965
00:20:15.965 12:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MLTEnqm5E5
00:20:15.965 12:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:20:15.965 12:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:20:15.965 12:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:20:15.965 12:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.MLTEnqm5E5'
00:20:15.965 12:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:20:15.965 12:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2246769
00:20:15.965 12:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:20:15.965 12:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:20:15.965 12:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2246769 /var/tmp/bdevperf.sock
00:20:15.965 12:09:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2246769 ']'
00:20:15.965 12:09:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:15.965 12:09:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100
00:20:15.965 12:09:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:20:15.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:15.965 12:09:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable
00:20:15.965 12:09:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:15.965 [2024-06-10 12:09:05.425374] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization...
00:20:15.965 [2024-06-10 12:09:05.425428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2246769 ]
00:20:15.965 EAL: No free 2048 kB hugepages reported on node 1
00:20:16.222 [2024-06-10 12:09:05.492778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:16.222 [2024-06-10 12:09:05.563210] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:20:16.784 12:09:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:20:16.784 12:09:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0
00:20:16.784 12:09:06 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MLTEnqm5E5
00:20:17.040 [2024-06-10 12:09:06.369174] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:20:17.040 [2024-06-10 12:09:06.369253] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:20:17.040 TLSTESTn1
00:20:17.040 12:09:06 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:20:17.040 Running I/O for 10 seconds...
00:20:29.230
00:20:29.230 Latency(us)
00:20:29.230 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:29.230 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:29.230 Verification LBA range: start 0x0 length 0x2000
00:20:29.230 TLSTESTn1 : 10.01 5621.17 21.96 0.00 0.00 22737.99 5531.24 33764.15
00:20:29.230 ===================================================================================================================
00:20:29.230 Total : 5621.17 21.96 0.00 0.00 22737.99 5531.24 33764.15
00:20:29.230 0
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2246769
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2246769 ']'
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2246769
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2246769
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']'
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2246769'
00:20:29.230 killing process with pid 2246769
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2246769
00:20:29.230 Received shutdown signal, test time was about 10.000000 seconds
00:20:29.230
00:20:29.230 Latency(us)
00:20:29.230 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:29.230 ===================================================================================================================
00:20:29.230 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:29.230 [2024-06-10 12:09:16.645205] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2246769
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pWYXcjeDBU
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pWYXcjeDBU
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pWYXcjeDBU
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.pWYXcjeDBU'
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2248805
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2248805 /var/tmp/bdevperf.sock
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2248805 ']'
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:20:29.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable
00:20:29.230 12:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:29.230 [2024-06-10 12:09:16.876973] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization...
00:20:29.230 [2024-06-10 12:09:16.877026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2248805 ]
00:20:29.230 EAL: No free 2048 kB hugepages reported on node 1
00:20:29.230 [2024-06-10 12:09:16.944341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:29.230 [2024-06-10 12:09:17.013046] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:20:29.230 12:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:20:29.230 12:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0
00:20:29.230 12:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pWYXcjeDBU
00:20:29.230 [2024-06-10 12:09:17.826481] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:20:29.230 [2024-06-10 12:09:17.826569] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:20:29.230 [2024-06-10 12:09:17.835296] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:20:29.230 [2024-06-10 12:09:17.835776] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1518420 (107): Transport endpoint is not connected
00:20:29.230 [2024-06-10 12:09:17.836768] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1518420 (9): Bad file descriptor
00:20:29.230 [2024-06-10 12:09:17.837769] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:29.230 [2024-06-10 12:09:17.837780] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:20:29.230 [2024-06-10 12:09:17.837794] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:29.230 request:
00:20:29.230 {
00:20:29.230 "name": "TLSTEST",
00:20:29.230 "trtype": "tcp",
00:20:29.231 "traddr": "10.0.0.2",
00:20:29.231 "adrfam": "ipv4",
00:20:29.231 "trsvcid": "4420",
00:20:29.231 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:20:29.231 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:20:29.231 "prchk_reftag": false,
00:20:29.231 "prchk_guard": false,
00:20:29.231 "hdgst": false,
00:20:29.231 "ddgst": false,
00:20:29.231 "psk": "/tmp/tmp.pWYXcjeDBU",
00:20:29.231 "method": "bdev_nvme_attach_controller",
00:20:29.231 "req_id": 1
00:20:29.231 }
00:20:29.231 Got JSON-RPC error response
00:20:29.231 response:
00:20:29.231 {
00:20:29.231 "code": -5,
00:20:29.231 "message": "Input/output error"
00:20:29.231 }
00:20:29.231 12:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2248805
00:20:29.231 12:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2248805 ']'
00:20:29.231 12:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2248805
00:20:29.231 12:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname
00:20:29.231 12:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:20:29.231 12:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2248805
00:20:29.231 12:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2
00:20:29.231 12:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']'
00:20:29.231 12:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2248805'
killing process with pid 2248805
00:20:29.231 12:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2248805
00:20:29.231 Received shutdown signal, test time was about 10.000000 seconds
00:20:29.231
00:20:29.231 Latency(us)
00:20:29.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:29.231 ===================================================================================================================
00:20:29.231 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:20:29.231 [2024-06-10 12:09:17.912689] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:20:29.231 12:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2248805
00:20:29.231 12:09:18 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1
00:20:29.231 12:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1
00:20:29.231 12:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:20:29.231 12:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]]
00:20:29.231 12:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:20:29.231 12:09:18 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.MLTEnqm5E5
00:20:29.231 12:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0
00:20:29.231 12:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.MLTEnqm5E5
00:20:29.231 12:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf
00:20:29.231 12:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:20:29.231 12:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf
00:20:29.231 12:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:20:29.231 12:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.MLTEnqm5E5
00:20:29.231 12:09:18 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:20:29.231 12:09:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:20:29.231 12:09:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2
00:20:29.231 12:09:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.MLTEnqm5E5'
00:20:29.231 12:09:18 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:20:29.231 12:09:18 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2248940
00:20:29.231 12:09:18 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:20:29.231 12:09:18 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:20:29.231 12:09:18 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2248940 /var/tmp/bdevperf.sock
00:20:29.231 12:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2248940 ']'
00:20:29.231 12:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:29.231 12:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100
00:20:29.231 12:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:20:29.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:29.231 12:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable
00:20:29.231 12:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:29.231 [2024-06-10 12:09:18.137487] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization...
00:20:29.231 [2024-06-10 12:09:18.137541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2248940 ]
00:20:29.231 EAL: No free 2048 kB hugepages reported on node 1
00:20:29.231 [2024-06-10 12:09:18.203430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:29.231 [2024-06-10 12:09:18.269117] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:20:29.488 12:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:20:29.488 12:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0
00:20:29.488 12:09:18 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.MLTEnqm5E5
00:20:29.746 [2024-06-10 12:09:19.075679] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:20:29.746 [2024-06-10 12:09:19.075761] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:20:29.746 [2024-06-10 12:09:19.080359] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
00:20:29.746 [2024-06-10 12:09:19.080384] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
00:20:29.746 [2024-06-10 12:09:19.080413] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:20:29.746 [2024-06-10 12:09:19.081063] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d1420 (107): Transport endpoint is not connected
00:20:29.746 [2024-06-10 12:09:19.082054] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d1420 (9): Bad file descriptor
00:20:29.746 [2024-06-10 12:09:19.083056] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:29.746 [2024-06-10 12:09:19.083067] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:20:29.746 [2024-06-10 12:09:19.083077] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:29.746 request:
00:20:29.746 {
00:20:29.746 "name": "TLSTEST",
00:20:29.746 "trtype": "tcp",
00:20:29.746 "traddr": "10.0.0.2",
00:20:29.746 "adrfam": "ipv4",
00:20:29.746 "trsvcid": "4420",
00:20:29.746 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:20:29.746 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:20:29.746 "prchk_reftag": false,
00:20:29.746 "prchk_guard": false,
00:20:29.746 "hdgst": false,
00:20:29.746 "ddgst": false,
00:20:29.746 "psk": "/tmp/tmp.MLTEnqm5E5",
00:20:29.746 "method": "bdev_nvme_attach_controller",
00:20:29.746 "req_id": 1
00:20:29.746 }
00:20:29.746 Got JSON-RPC error response
00:20:29.746 response:
00:20:29.746 {
00:20:29.746 "code": -5,
00:20:29.746 "message": "Input/output error"
00:20:29.746 }
00:20:29.746 12:09:19 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2248940
00:20:29.746 12:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2248940 ']'
00:20:29.746 12:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2248940
00:20:29.746 12:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname
00:20:29.746 12:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:20:29.746 12:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2248940
00:20:29.746 12:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2
00:20:29.746 12:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']'
00:20:29.746 12:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2248940'
00:20:29.746 killing process with pid 2248940
00:20:29.746 12:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2248940
00:20:29.746 Received shutdown signal, test time was about 10.000000 seconds
00:20:29.746
00:20:29.746 Latency(us)
00:20:29.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:29.746 ===================================================================================================================
00:20:29.746 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:20:29.746 [2024-06-10 12:09:19.152535] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:20:29.746 12:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2248940
00:20:30.003 12:09:19 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1
00:20:30.003 12:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1
00:20:30.003 12:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:20:30.003 12:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]]
00:20:30.003 12:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:20:30.003 12:09:19 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.MLTEnqm5E5
00:20:30.003 12:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0
00:20:30.003 12:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.MLTEnqm5E5
00:20:30.003 12:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf
00:20:30.003 12:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:20:30.003 12:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf
00:20:30.003 12:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:20:30.003 12:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.MLTEnqm5E5
00:20:30.003 12:09:19 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:20:30.003 12:09:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2
00:20:30.003 12:09:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:20:30.003 12:09:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.MLTEnqm5E5'
00:20:30.003 12:09:19 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:20:30.003 12:09:19 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2249164
00:20:30.003 12:09:19 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:20:30.003 12:09:19 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:20:30.003 12:09:19 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2249164 /var/tmp/bdevperf.sock
00:20:30.003 12:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2249164 ']'
00:20:30.003 12:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:30.003 12:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100
00:20:30.003 12:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:20:30.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:30.003 12:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable
00:20:30.003 12:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:30.003 [2024-06-10 12:09:19.373121] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization...
00:20:30.003 [2024-06-10 12:09:19.373173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2249164 ]
00:20:30.003 EAL: No free 2048 kB hugepages reported on node 1
00:20:30.003 [2024-06-10 12:09:19.438344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:30.003 [2024-06-10 12:09:19.501545] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:20:30.935 12:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:20:30.935 12:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0
00:20:30.935 12:09:20 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MLTEnqm5E5
00:20:30.935 [2024-06-10 12:09:20.327243] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:20:30.935 [2024-06-10 12:09:20.327348] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:20:30.936 [2024-06-10 12:09:20.331918] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
00:20:30.936 [2024-06-10 12:09:20.331943] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
00:20:30.936 [2024-06-10 12:09:20.331971] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:20:30.936 [2024-06-10 12:09:20.332623] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1175420 (107): Transport endpoint is not connected
00:20:30.936 [2024-06-10 12:09:20.333613] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1175420 (9): Bad file descriptor
00:20:30.936 [2024-06-10 12:09:20.334615] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:20:30.936 [2024-06-10 12:09:20.334627] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:20:30.936 [2024-06-10 12:09:20.334637] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:20:30.936 request:
00:20:30.936 {
00:20:30.936 "name": "TLSTEST",
00:20:30.936 "trtype": "tcp",
00:20:30.936 "traddr": "10.0.0.2",
00:20:30.936 "adrfam": "ipv4",
00:20:30.936 "trsvcid": "4420",
00:20:30.936 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:20:30.936 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:20:30.936 "prchk_reftag": false,
00:20:30.936 "prchk_guard": false,
00:20:30.936 "hdgst": false,
00:20:30.936 "ddgst": false,
00:20:30.936 "psk": "/tmp/tmp.MLTEnqm5E5",
00:20:30.936 "method": "bdev_nvme_attach_controller",
00:20:30.936 "req_id": 1
00:20:30.936 }
00:20:30.936 Got JSON-RPC error response
00:20:30.936 response:
00:20:30.936 {
00:20:30.936 "code": -5,
00:20:30.936 "message": "Input/output error"
00:20:30.936 }
00:20:30.936 12:09:20 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2249164
00:20:30.936 12:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2249164 ']'
00:20:30.936 12:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2249164
00:20:30.936 12:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname
00:20:30.936 12:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:20:30.936 12:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 --
# ps --no-headers -o comm= 2249164 00:20:30.936 12:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:20:30.936 12:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:20:30.936 12:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2249164' 00:20:30.936 killing process with pid 2249164 00:20:30.936 12:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2249164 00:20:30.936 Received shutdown signal, test time was about 10.000000 seconds 00:20:30.936 00:20:30.936 Latency(us) 00:20:30.936 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:30.936 =================================================================================================================== 00:20:30.936 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:30.936 [2024-06-10 12:09:20.407811] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:30.936 12:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2249164 00:20:31.194 12:09:20 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:31.194 12:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:20:31.194 12:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:31.194 12:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:31.194 12:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:31.194 12:09:20 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:31.194 12:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:20:31.194 12:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 
00:20:31.194 12:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:20:31.194 12:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:31.194 12:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:20:31.194 12:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:31.194 12:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:31.194 12:09:20 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:31.194 12:09:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:31.194 12:09:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:31.194 12:09:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:31.194 12:09:20 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:31.194 12:09:20 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2249432 00:20:31.194 12:09:20 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:31.194 12:09:20 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:31.194 12:09:20 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2249432 /var/tmp/bdevperf.sock 00:20:31.194 12:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2249432 ']' 00:20:31.194 12:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:31.194 12:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:31.194 12:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:20:31.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:31.194 12:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:31.194 12:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:31.194 [2024-06-10 12:09:20.631894] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:20:31.194 [2024-06-10 12:09:20.631947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2249432 ] 00:20:31.194 EAL: No free 2048 kB hugepages reported on node 1 00:20:31.194 [2024-06-10 12:09:20.698541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.452 [2024-06-10 12:09:20.774098] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:20:32.016 12:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:32.016 12:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:32.016 12:09:21 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:32.273 [2024-06-10 12:09:21.614865] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:32.274 [2024-06-10 12:09:21.617136] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2014990 (9): Bad file descriptor 00:20:32.274 [2024-06-10 12:09:21.618133] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:32.274 
[2024-06-10 12:09:21.618146] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:32.274 [2024-06-10 12:09:21.618157] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:32.274 request: 00:20:32.274 { 00:20:32.274 "name": "TLSTEST", 00:20:32.274 "trtype": "tcp", 00:20:32.274 "traddr": "10.0.0.2", 00:20:32.274 "adrfam": "ipv4", 00:20:32.274 "trsvcid": "4420", 00:20:32.274 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:32.274 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:32.274 "prchk_reftag": false, 00:20:32.274 "prchk_guard": false, 00:20:32.274 "hdgst": false, 00:20:32.274 "ddgst": false, 00:20:32.274 "method": "bdev_nvme_attach_controller", 00:20:32.274 "req_id": 1 00:20:32.274 } 00:20:32.274 Got JSON-RPC error response 00:20:32.274 response: 00:20:32.274 { 00:20:32.274 "code": -5, 00:20:32.274 "message": "Input/output error" 00:20:32.274 } 00:20:32.274 12:09:21 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2249432 00:20:32.274 12:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2249432 ']' 00:20:32.274 12:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2249432 00:20:32.274 12:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:32.274 12:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:32.274 12:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2249432 00:20:32.274 12:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:20:32.274 12:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:20:32.274 12:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2249432' 00:20:32.274 killing process with pid 2249432 00:20:32.274 12:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2249432 00:20:32.274 Received shutdown 
signal, test time was about 10.000000 seconds 00:20:32.274 00:20:32.274 Latency(us) 00:20:32.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.274 =================================================================================================================== 00:20:32.274 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:32.274 12:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2249432 00:20:32.531 12:09:21 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:32.531 12:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:20:32.531 12:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:32.531 12:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:32.531 12:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:32.531 12:09:21 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 2243749 00:20:32.531 12:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2243749 ']' 00:20:32.531 12:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2243749 00:20:32.531 12:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:32.531 12:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:32.531 12:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2243749 00:20:32.531 12:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:20:32.531 12:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:20:32.531 12:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2243749' 00:20:32.531 killing process with pid 2243749 00:20:32.531 12:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2243749 00:20:32.531 [2024-06-10 12:09:21.920739] 
app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:32.531 12:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2243749 00:20:32.788 12:09:22 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:32.788 12:09:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:32.788 12:09:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:32.788 12:09:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:32.788 12:09:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:32.788 12:09:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:20:32.788 12:09:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:32.788 12:09:22 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:32.788 12:09:22 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:20:32.788 12:09:22 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.G8UBtl6C83 00:20:32.788 12:09:22 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:32.788 12:09:22 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.G8UBtl6C83 00:20:32.788 12:09:22 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:32.788 12:09:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:32.788 12:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:32.788 12:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:32.788 12:09:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2249723 
00:20:32.788 12:09:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:32.788 12:09:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2249723 00:20:32.788 12:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2249723 ']' 00:20:32.788 12:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:32.788 12:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:32.788 12:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:32.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:32.788 12:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:32.788 12:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:32.788 [2024-06-10 12:09:22.225060] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:20:32.788 [2024-06-10 12:09:22.225108] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:32.788 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.788 [2024-06-10 12:09:22.296409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.045 [2024-06-10 12:09:22.368112] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:33.045 [2024-06-10 12:09:22.368157] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:33.045 [2024-06-10 12:09:22.368167] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:33.045 [2024-06-10 12:09:22.368176] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:33.045 [2024-06-10 12:09:22.368183] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:33.045 [2024-06-10 12:09:22.368203] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:20:33.609 12:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:33.609 12:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:33.609 12:09:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:33.609 12:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:33.609 12:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.609 12:09:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:33.609 12:09:23 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.G8UBtl6C83 00:20:33.609 12:09:23 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.G8UBtl6C83 00:20:33.609 12:09:23 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:33.865 [2024-06-10 12:09:23.206580] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:33.865 12:09:23 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:34.122 12:09:23 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 
00:20:34.122 [2024-06-10 12:09:23.531389] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:34.122 [2024-06-10 12:09:23.531617] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:34.122 12:09:23 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:34.379 malloc0 00:20:34.379 12:09:23 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:34.379 12:09:23 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.G8UBtl6C83 00:20:34.636 [2024-06-10 12:09:24.028883] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:34.636 12:09:24 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.G8UBtl6C83 00:20:34.636 12:09:24 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:34.636 12:09:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:34.636 12:09:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:34.636 12:09:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.G8UBtl6C83' 00:20:34.636 12:09:24 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:34.636 12:09:24 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:34.636 12:09:24 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2250020 00:20:34.636 12:09:24 
nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:34.636 12:09:24 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2250020 /var/tmp/bdevperf.sock 00:20:34.636 12:09:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2250020 ']' 00:20:34.636 12:09:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:34.636 12:09:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:34.636 12:09:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:34.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:34.636 12:09:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:34.636 12:09:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.636 [2024-06-10 12:09:24.078869] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:20:34.636 [2024-06-10 12:09:24.078918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2250020 ] 00:20:34.636 EAL: No free 2048 kB hugepages reported on node 1 00:20:34.636 [2024-06-10 12:09:24.144301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.893 [2024-06-10 12:09:24.218671] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:20:35.455 12:09:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:35.455 12:09:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:35.455 12:09:24 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.G8UBtl6C83 00:20:35.712 [2024-06-10 12:09:25.025408] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:35.712 [2024-06-10 12:09:25.025494] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:35.712 TLSTESTn1 00:20:35.712 12:09:25 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:35.712 Running I/O for 10 seconds... 
00:20:47.899 00:20:47.899 Latency(us) 00:20:47.899 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.899 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:47.899 Verification LBA range: start 0x0 length 0x2000 00:20:47.899 TLSTESTn1 : 10.02 3549.89 13.87 0.00 0.00 36011.01 7077.89 50121.93 00:20:47.899 =================================================================================================================== 00:20:47.899 Total : 3549.89 13.87 0.00 0.00 36011.01 7077.89 50121.93 00:20:47.899 0 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2250020 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2250020 ']' 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2250020 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2250020 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2250020' 00:20:47.899 killing process with pid 2250020 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2250020 00:20:47.899 Received shutdown signal, test time was about 10.000000 seconds 00:20:47.899 00:20:47.899 Latency(us) 00:20:47.899 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.899 
=================================================================================================================== 00:20:47.899 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:47.899 [2024-06-10 12:09:35.307788] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2250020 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.G8UBtl6C83 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.G8UBtl6C83 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.G8UBtl6C83 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.G8UBtl6C83 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.G8UBtl6C83' 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2251917 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2251917 /var/tmp/bdevperf.sock 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2251917 ']' 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:47.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:47.899 12:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.899 [2024-06-10 12:09:35.541317] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:20:47.899 [2024-06-10 12:09:35.541368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2251917 ] 00:20:47.899 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.899 [2024-06-10 12:09:35.609308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.899 [2024-06-10 12:09:35.676386] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.899 12:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:47.899 12:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:47.899 12:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.G8UBtl6C83 00:20:47.899 [2024-06-10 12:09:36.486009] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:47.899 [2024-06-10 12:09:36.486069] bdev_nvme.c:6116:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:47.899 [2024-06-10 12:09:36.486078] bdev_nvme.c:6225:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.G8UBtl6C83 00:20:47.899 request: 00:20:47.899 { 00:20:47.899 "name": "TLSTEST", 00:20:47.899 "trtype": "tcp", 00:20:47.899 "traddr": "10.0.0.2", 00:20:47.899 "adrfam": "ipv4", 00:20:47.899 "trsvcid": "4420", 00:20:47.899 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.899 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:47.899 "prchk_reftag": false, 00:20:47.899 "prchk_guard": false, 00:20:47.899 "hdgst": false, 00:20:47.899 "ddgst": false, 00:20:47.899 "psk": "/tmp/tmp.G8UBtl6C83", 00:20:47.899 "method": "bdev_nvme_attach_controller", 00:20:47.899 "req_id": 1 
00:20:47.899 } 00:20:47.899 Got JSON-RPC error response 00:20:47.899 response: 00:20:47.899 { 00:20:47.899 "code": -1, 00:20:47.899 "message": "Operation not permitted" 00:20:47.899 } 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2251917 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2251917 ']' 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2251917 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2251917 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2251917' 00:20:47.900 killing process with pid 2251917 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2251917 00:20:47.900 Received shutdown signal, test time was about 10.000000 seconds 00:20:47.900 00:20:47.900 Latency(us) 00:20:47.900 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.900 =================================================================================================================== 00:20:47.900 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2251917 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 2249723 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2249723 ']' 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2249723 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2249723 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2249723' 00:20:47.900 killing process with pid 2249723 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2249723 00:20:47.900 [2024-06-10 12:09:36.788087] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2249723 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2252175 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2252175 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2252175 ']' 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:47.900 12:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.900 [2024-06-10 12:09:37.032998] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:20:47.900 [2024-06-10 12:09:37.033050] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.900 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.900 [2024-06-10 12:09:37.106814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.900 [2024-06-10 12:09:37.173508] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:47.900 [2024-06-10 12:09:37.173552] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:47.900 [2024-06-10 12:09:37.173562] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:47.900 [2024-06-10 12:09:37.173570] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:47.900 [2024-06-10 12:09:37.173593] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:47.900 [2024-06-10 12:09:37.173617] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:20:48.465 12:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:48.465 12:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:48.465 12:09:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:48.465 12:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:48.465 12:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.465 12:09:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:48.466 12:09:37 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.G8UBtl6C83 00:20:48.466 12:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:20:48.466 12:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.G8UBtl6C83 00:20:48.466 12:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=setup_nvmf_tgt 00:20:48.466 12:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:48.466 12:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t setup_nvmf_tgt 00:20:48.466 12:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:48.466 12:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # setup_nvmf_tgt /tmp/tmp.G8UBtl6C83 00:20:48.466 12:09:37 nvmf_tcp.nvmf_tls -- 
target/tls.sh@49 -- # local key=/tmp/tmp.G8UBtl6C83 00:20:48.466 12:09:37 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:48.723 [2024-06-10 12:09:38.028579] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:48.723 12:09:38 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:48.723 12:09:38 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:48.981 [2024-06-10 12:09:38.361407] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:48.981 [2024-06-10 12:09:38.361626] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:48.981 12:09:38 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:49.266 malloc0 00:20:49.266 12:09:38 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:49.266 12:09:38 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.G8UBtl6C83 00:20:49.545 [2024-06-10 12:09:38.879192] tcp.c:3580:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:49.545 [2024-06-10 12:09:38.879223] tcp.c:3666:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:49.545 [2024-06-10 12:09:38.879266] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:49.545 
request: 00:20:49.545 { 00:20:49.545 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.545 "host": "nqn.2016-06.io.spdk:host1", 00:20:49.545 "psk": "/tmp/tmp.G8UBtl6C83", 00:20:49.545 "method": "nvmf_subsystem_add_host", 00:20:49.545 "req_id": 1 00:20:49.545 } 00:20:49.545 Got JSON-RPC error response 00:20:49.545 response: 00:20:49.545 { 00:20:49.545 "code": -32603, 00:20:49.545 "message": "Internal error" 00:20:49.545 } 00:20:49.545 12:09:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:20:49.545 12:09:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:49.546 12:09:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:49.546 12:09:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:49.546 12:09:38 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 2252175 00:20:49.546 12:09:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2252175 ']' 00:20:49.546 12:09:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2252175 00:20:49.546 12:09:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:49.546 12:09:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:49.546 12:09:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2252175 00:20:49.546 12:09:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:20:49.546 12:09:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:20:49.546 12:09:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2252175' 00:20:49.546 killing process with pid 2252175 00:20:49.546 12:09:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2252175 00:20:49.546 12:09:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2252175 00:20:49.803 12:09:39 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.G8UBtl6C83 
00:20:49.803 12:09:39 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:49.803 12:09:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:49.803 12:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:49.803 12:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.803 12:09:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2252729 00:20:49.803 12:09:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:49.803 12:09:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2252729 00:20:49.803 12:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2252729 ']' 00:20:49.803 12:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.803 12:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:49.803 12:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.803 12:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:49.803 12:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.803 [2024-06-10 12:09:39.204022] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:20:49.803 [2024-06-10 12:09:39.204068] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.803 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.803 [2024-06-10 12:09:39.276643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.059 [2024-06-10 12:09:39.339779] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:50.059 [2024-06-10 12:09:39.339819] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:50.059 [2024-06-10 12:09:39.339828] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:50.059 [2024-06-10 12:09:39.339837] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:50.059 [2024-06-10 12:09:39.339844] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:50.059 [2024-06-10 12:09:39.339867] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:20:50.626 12:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:50.626 12:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:50.626 12:09:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:50.626 12:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:50.626 12:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.626 12:09:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:50.626 12:09:40 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.G8UBtl6C83 00:20:50.626 12:09:40 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.G8UBtl6C83 00:20:50.626 12:09:40 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:50.883 [2024-06-10 12:09:40.198170] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:50.883 12:09:40 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:50.883 12:09:40 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:51.140 [2024-06-10 12:09:40.551081] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:51.140 [2024-06-10 12:09:40.551297] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:51.140 12:09:40 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 
4096 -b malloc0 00:20:51.397 malloc0 00:20:51.397 12:09:40 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:51.397 12:09:40 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.G8UBtl6C83 00:20:51.654 [2024-06-10 12:09:41.060659] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:51.654 12:09:41 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:51.654 12:09:41 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2253025 00:20:51.654 12:09:41 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:51.654 12:09:41 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2253025 /var/tmp/bdevperf.sock 00:20:51.654 12:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2253025 ']' 00:20:51.654 12:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:51.654 12:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:51.654 12:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:51.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:51.654 12:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:51.655 12:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.655 [2024-06-10 12:09:41.124256] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:20:51.655 [2024-06-10 12:09:41.124306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2253025 ] 00:20:51.655 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.913 [2024-06-10 12:09:41.190385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.913 [2024-06-10 12:09:41.260152] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.477 12:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:52.477 12:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:52.477 12:09:41 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.G8UBtl6C83 00:20:52.733 [2024-06-10 12:09:42.091089] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:52.733 [2024-06-10 12:09:42.091172] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:52.733 TLSTESTn1 00:20:52.733 12:09:42 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:52.990 12:09:42 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:20:52.990 "subsystems": [ 00:20:52.990 { 00:20:52.990 "subsystem": "keyring", 
00:20:52.990 "config": [] 00:20:52.990 }, 00:20:52.990 { 00:20:52.990 "subsystem": "iobuf", 00:20:52.990 "config": [ 00:20:52.990 { 00:20:52.990 "method": "iobuf_set_options", 00:20:52.990 "params": { 00:20:52.990 "small_pool_count": 8192, 00:20:52.990 "large_pool_count": 1024, 00:20:52.990 "small_bufsize": 8192, 00:20:52.990 "large_bufsize": 135168 00:20:52.990 } 00:20:52.990 } 00:20:52.990 ] 00:20:52.990 }, 00:20:52.990 { 00:20:52.990 "subsystem": "sock", 00:20:52.990 "config": [ 00:20:52.990 { 00:20:52.990 "method": "sock_set_default_impl", 00:20:52.990 "params": { 00:20:52.990 "impl_name": "posix" 00:20:52.990 } 00:20:52.990 }, 00:20:52.990 { 00:20:52.990 "method": "sock_impl_set_options", 00:20:52.990 "params": { 00:20:52.990 "impl_name": "ssl", 00:20:52.990 "recv_buf_size": 4096, 00:20:52.990 "send_buf_size": 4096, 00:20:52.990 "enable_recv_pipe": true, 00:20:52.990 "enable_quickack": false, 00:20:52.990 "enable_placement_id": 0, 00:20:52.990 "enable_zerocopy_send_server": true, 00:20:52.990 "enable_zerocopy_send_client": false, 00:20:52.990 "zerocopy_threshold": 0, 00:20:52.990 "tls_version": 0, 00:20:52.990 "enable_ktls": false 00:20:52.990 } 00:20:52.990 }, 00:20:52.990 { 00:20:52.990 "method": "sock_impl_set_options", 00:20:52.990 "params": { 00:20:52.990 "impl_name": "posix", 00:20:52.990 "recv_buf_size": 2097152, 00:20:52.990 "send_buf_size": 2097152, 00:20:52.990 "enable_recv_pipe": true, 00:20:52.990 "enable_quickack": false, 00:20:52.990 "enable_placement_id": 0, 00:20:52.990 "enable_zerocopy_send_server": true, 00:20:52.990 "enable_zerocopy_send_client": false, 00:20:52.990 "zerocopy_threshold": 0, 00:20:52.990 "tls_version": 0, 00:20:52.990 "enable_ktls": false 00:20:52.990 } 00:20:52.990 } 00:20:52.990 ] 00:20:52.990 }, 00:20:52.990 { 00:20:52.990 "subsystem": "vmd", 00:20:52.990 "config": [] 00:20:52.990 }, 00:20:52.990 { 00:20:52.990 "subsystem": "accel", 00:20:52.990 "config": [ 00:20:52.990 { 00:20:52.990 "method": "accel_set_options", 
00:20:52.990 "params": { 00:20:52.990 "small_cache_size": 128, 00:20:52.990 "large_cache_size": 16, 00:20:52.990 "task_count": 2048, 00:20:52.990 "sequence_count": 2048, 00:20:52.990 "buf_count": 2048 00:20:52.990 } 00:20:52.990 } 00:20:52.990 ] 00:20:52.990 }, 00:20:52.990 { 00:20:52.990 "subsystem": "bdev", 00:20:52.990 "config": [ 00:20:52.990 { 00:20:52.990 "method": "bdev_set_options", 00:20:52.990 "params": { 00:20:52.990 "bdev_io_pool_size": 65535, 00:20:52.990 "bdev_io_cache_size": 256, 00:20:52.990 "bdev_auto_examine": true, 00:20:52.990 "iobuf_small_cache_size": 128, 00:20:52.990 "iobuf_large_cache_size": 16 00:20:52.990 } 00:20:52.990 }, 00:20:52.990 { 00:20:52.990 "method": "bdev_raid_set_options", 00:20:52.990 "params": { 00:20:52.990 "process_window_size_kb": 1024 00:20:52.990 } 00:20:52.990 }, 00:20:52.990 { 00:20:52.990 "method": "bdev_iscsi_set_options", 00:20:52.990 "params": { 00:20:52.990 "timeout_sec": 30 00:20:52.990 } 00:20:52.990 }, 00:20:52.990 { 00:20:52.990 "method": "bdev_nvme_set_options", 00:20:52.990 "params": { 00:20:52.990 "action_on_timeout": "none", 00:20:52.990 "timeout_us": 0, 00:20:52.990 "timeout_admin_us": 0, 00:20:52.990 "keep_alive_timeout_ms": 10000, 00:20:52.990 "arbitration_burst": 0, 00:20:52.990 "low_priority_weight": 0, 00:20:52.990 "medium_priority_weight": 0, 00:20:52.990 "high_priority_weight": 0, 00:20:52.990 "nvme_adminq_poll_period_us": 10000, 00:20:52.990 "nvme_ioq_poll_period_us": 0, 00:20:52.990 "io_queue_requests": 0, 00:20:52.990 "delay_cmd_submit": true, 00:20:52.990 "transport_retry_count": 4, 00:20:52.990 "bdev_retry_count": 3, 00:20:52.990 "transport_ack_timeout": 0, 00:20:52.990 "ctrlr_loss_timeout_sec": 0, 00:20:52.990 "reconnect_delay_sec": 0, 00:20:52.990 "fast_io_fail_timeout_sec": 0, 00:20:52.990 "disable_auto_failback": false, 00:20:52.990 "generate_uuids": false, 00:20:52.990 "transport_tos": 0, 00:20:52.990 "nvme_error_stat": false, 00:20:52.990 "rdma_srq_size": 0, 00:20:52.990 "io_path_stat": 
false, 00:20:52.990 "allow_accel_sequence": false, 00:20:52.990 "rdma_max_cq_size": 0, 00:20:52.990 "rdma_cm_event_timeout_ms": 0, 00:20:52.990 "dhchap_digests": [ 00:20:52.991 "sha256", 00:20:52.991 "sha384", 00:20:52.991 "sha512" 00:20:52.991 ], 00:20:52.991 "dhchap_dhgroups": [ 00:20:52.991 "null", 00:20:52.991 "ffdhe2048", 00:20:52.991 "ffdhe3072", 00:20:52.991 "ffdhe4096", 00:20:52.991 "ffdhe6144", 00:20:52.991 "ffdhe8192" 00:20:52.991 ] 00:20:52.991 } 00:20:52.991 }, 00:20:52.991 { 00:20:52.991 "method": "bdev_nvme_set_hotplug", 00:20:52.991 "params": { 00:20:52.991 "period_us": 100000, 00:20:52.991 "enable": false 00:20:52.991 } 00:20:52.991 }, 00:20:52.991 { 00:20:52.991 "method": "bdev_malloc_create", 00:20:52.991 "params": { 00:20:52.991 "name": "malloc0", 00:20:52.991 "num_blocks": 8192, 00:20:52.991 "block_size": 4096, 00:20:52.991 "physical_block_size": 4096, 00:20:52.991 "uuid": "c317d1d9-be0a-43bd-8345-0a001f97f66e", 00:20:52.991 "optimal_io_boundary": 0 00:20:52.991 } 00:20:52.991 }, 00:20:52.991 { 00:20:52.991 "method": "bdev_wait_for_examine" 00:20:52.991 } 00:20:52.991 ] 00:20:52.991 }, 00:20:52.991 { 00:20:52.991 "subsystem": "nbd", 00:20:52.991 "config": [] 00:20:52.991 }, 00:20:52.991 { 00:20:52.991 "subsystem": "scheduler", 00:20:52.991 "config": [ 00:20:52.991 { 00:20:52.991 "method": "framework_set_scheduler", 00:20:52.991 "params": { 00:20:52.991 "name": "static" 00:20:52.991 } 00:20:52.991 } 00:20:52.991 ] 00:20:52.991 }, 00:20:52.991 { 00:20:52.991 "subsystem": "nvmf", 00:20:52.991 "config": [ 00:20:52.991 { 00:20:52.991 "method": "nvmf_set_config", 00:20:52.991 "params": { 00:20:52.991 "discovery_filter": "match_any", 00:20:52.991 "admin_cmd_passthru": { 00:20:52.991 "identify_ctrlr": false 00:20:52.991 } 00:20:52.991 } 00:20:52.991 }, 00:20:52.991 { 00:20:52.991 "method": "nvmf_set_max_subsystems", 00:20:52.991 "params": { 00:20:52.991 "max_subsystems": 1024 00:20:52.991 } 00:20:52.991 }, 00:20:52.991 { 00:20:52.991 "method": 
"nvmf_set_crdt", 00:20:52.991 "params": { 00:20:52.991 "crdt1": 0, 00:20:52.991 "crdt2": 0, 00:20:52.991 "crdt3": 0 00:20:52.991 } 00:20:52.991 }, 00:20:52.991 { 00:20:52.991 "method": "nvmf_create_transport", 00:20:52.991 "params": { 00:20:52.991 "trtype": "TCP", 00:20:52.991 "max_queue_depth": 128, 00:20:52.991 "max_io_qpairs_per_ctrlr": 127, 00:20:52.991 "in_capsule_data_size": 4096, 00:20:52.991 "max_io_size": 131072, 00:20:52.991 "io_unit_size": 131072, 00:20:52.991 "max_aq_depth": 128, 00:20:52.991 "num_shared_buffers": 511, 00:20:52.991 "buf_cache_size": 4294967295, 00:20:52.991 "dif_insert_or_strip": false, 00:20:52.991 "zcopy": false, 00:20:52.991 "c2h_success": false, 00:20:52.991 "sock_priority": 0, 00:20:52.991 "abort_timeout_sec": 1, 00:20:52.991 "ack_timeout": 0, 00:20:52.991 "data_wr_pool_size": 0 00:20:52.991 } 00:20:52.991 }, 00:20:52.991 { 00:20:52.991 "method": "nvmf_create_subsystem", 00:20:52.991 "params": { 00:20:52.991 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:52.991 "allow_any_host": false, 00:20:52.991 "serial_number": "SPDK00000000000001", 00:20:52.991 "model_number": "SPDK bdev Controller", 00:20:52.991 "max_namespaces": 10, 00:20:52.991 "min_cntlid": 1, 00:20:52.991 "max_cntlid": 65519, 00:20:52.991 "ana_reporting": false 00:20:52.991 } 00:20:52.991 }, 00:20:52.991 { 00:20:52.991 "method": "nvmf_subsystem_add_host", 00:20:52.991 "params": { 00:20:52.991 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:52.991 "host": "nqn.2016-06.io.spdk:host1", 00:20:52.991 "psk": "/tmp/tmp.G8UBtl6C83" 00:20:52.991 } 00:20:52.991 }, 00:20:52.991 { 00:20:52.991 "method": "nvmf_subsystem_add_ns", 00:20:52.991 "params": { 00:20:52.991 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:52.991 "namespace": { 00:20:52.991 "nsid": 1, 00:20:52.991 "bdev_name": "malloc0", 00:20:52.991 "nguid": "C317D1D9BE0A43BD83450A001F97F66E", 00:20:52.991 "uuid": "c317d1d9-be0a-43bd-8345-0a001f97f66e", 00:20:52.991 "no_auto_visible": false 00:20:52.991 } 00:20:52.991 } 00:20:52.991 }, 
00:20:52.991 { 00:20:52.991 "method": "nvmf_subsystem_add_listener", 00:20:52.991 "params": { 00:20:52.991 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:52.991 "listen_address": { 00:20:52.991 "trtype": "TCP", 00:20:52.991 "adrfam": "IPv4", 00:20:52.991 "traddr": "10.0.0.2", 00:20:52.991 "trsvcid": "4420" 00:20:52.991 }, 00:20:52.991 "secure_channel": true 00:20:52.991 } 00:20:52.991 } 00:20:52.991 ] 00:20:52.991 } 00:20:52.991 ] 00:20:52.991 }' 00:20:52.991 12:09:42 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:53.249 12:09:42 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:20:53.249 "subsystems": [ 00:20:53.249 { 00:20:53.249 "subsystem": "keyring", 00:20:53.249 "config": [] 00:20:53.249 }, 00:20:53.249 { 00:20:53.249 "subsystem": "iobuf", 00:20:53.249 "config": [ 00:20:53.249 { 00:20:53.249 "method": "iobuf_set_options", 00:20:53.249 "params": { 00:20:53.249 "small_pool_count": 8192, 00:20:53.249 "large_pool_count": 1024, 00:20:53.249 "small_bufsize": 8192, 00:20:53.249 "large_bufsize": 135168 00:20:53.249 } 00:20:53.249 } 00:20:53.249 ] 00:20:53.249 }, 00:20:53.249 { 00:20:53.249 "subsystem": "sock", 00:20:53.249 "config": [ 00:20:53.249 { 00:20:53.249 "method": "sock_set_default_impl", 00:20:53.249 "params": { 00:20:53.249 "impl_name": "posix" 00:20:53.249 } 00:20:53.249 }, 00:20:53.249 { 00:20:53.249 "method": "sock_impl_set_options", 00:20:53.249 "params": { 00:20:53.249 "impl_name": "ssl", 00:20:53.249 "recv_buf_size": 4096, 00:20:53.249 "send_buf_size": 4096, 00:20:53.249 "enable_recv_pipe": true, 00:20:53.249 "enable_quickack": false, 00:20:53.249 "enable_placement_id": 0, 00:20:53.249 "enable_zerocopy_send_server": true, 00:20:53.249 "enable_zerocopy_send_client": false, 00:20:53.249 "zerocopy_threshold": 0, 00:20:53.249 "tls_version": 0, 00:20:53.249 "enable_ktls": false 00:20:53.249 } 00:20:53.249 }, 00:20:53.249 { 00:20:53.249 
"method": "sock_impl_set_options", 00:20:53.249 "params": { 00:20:53.249 "impl_name": "posix", 00:20:53.249 "recv_buf_size": 2097152, 00:20:53.249 "send_buf_size": 2097152, 00:20:53.249 "enable_recv_pipe": true, 00:20:53.249 "enable_quickack": false, 00:20:53.249 "enable_placement_id": 0, 00:20:53.249 "enable_zerocopy_send_server": true, 00:20:53.249 "enable_zerocopy_send_client": false, 00:20:53.249 "zerocopy_threshold": 0, 00:20:53.249 "tls_version": 0, 00:20:53.249 "enable_ktls": false 00:20:53.249 } 00:20:53.249 } 00:20:53.249 ] 00:20:53.249 }, 00:20:53.249 { 00:20:53.249 "subsystem": "vmd", 00:20:53.249 "config": [] 00:20:53.249 }, 00:20:53.249 { 00:20:53.249 "subsystem": "accel", 00:20:53.249 "config": [ 00:20:53.249 { 00:20:53.249 "method": "accel_set_options", 00:20:53.249 "params": { 00:20:53.249 "small_cache_size": 128, 00:20:53.249 "large_cache_size": 16, 00:20:53.249 "task_count": 2048, 00:20:53.249 "sequence_count": 2048, 00:20:53.249 "buf_count": 2048 00:20:53.249 } 00:20:53.249 } 00:20:53.249 ] 00:20:53.249 }, 00:20:53.249 { 00:20:53.249 "subsystem": "bdev", 00:20:53.249 "config": [ 00:20:53.249 { 00:20:53.249 "method": "bdev_set_options", 00:20:53.249 "params": { 00:20:53.249 "bdev_io_pool_size": 65535, 00:20:53.249 "bdev_io_cache_size": 256, 00:20:53.249 "bdev_auto_examine": true, 00:20:53.249 "iobuf_small_cache_size": 128, 00:20:53.249 "iobuf_large_cache_size": 16 00:20:53.249 } 00:20:53.249 }, 00:20:53.249 { 00:20:53.249 "method": "bdev_raid_set_options", 00:20:53.249 "params": { 00:20:53.249 "process_window_size_kb": 1024 00:20:53.249 } 00:20:53.249 }, 00:20:53.249 { 00:20:53.249 "method": "bdev_iscsi_set_options", 00:20:53.249 "params": { 00:20:53.249 "timeout_sec": 30 00:20:53.249 } 00:20:53.249 }, 00:20:53.249 { 00:20:53.249 "method": "bdev_nvme_set_options", 00:20:53.249 "params": { 00:20:53.249 "action_on_timeout": "none", 00:20:53.249 "timeout_us": 0, 00:20:53.249 "timeout_admin_us": 0, 00:20:53.249 "keep_alive_timeout_ms": 10000, 
00:20:53.249 "arbitration_burst": 0, 00:20:53.249 "low_priority_weight": 0, 00:20:53.249 "medium_priority_weight": 0, 00:20:53.249 "high_priority_weight": 0, 00:20:53.249 "nvme_adminq_poll_period_us": 10000, 00:20:53.249 "nvme_ioq_poll_period_us": 0, 00:20:53.250 "io_queue_requests": 512, 00:20:53.250 "delay_cmd_submit": true, 00:20:53.250 "transport_retry_count": 4, 00:20:53.250 "bdev_retry_count": 3, 00:20:53.250 "transport_ack_timeout": 0, 00:20:53.250 "ctrlr_loss_timeout_sec": 0, 00:20:53.250 "reconnect_delay_sec": 0, 00:20:53.250 "fast_io_fail_timeout_sec": 0, 00:20:53.250 "disable_auto_failback": false, 00:20:53.250 "generate_uuids": false, 00:20:53.250 "transport_tos": 0, 00:20:53.250 "nvme_error_stat": false, 00:20:53.250 "rdma_srq_size": 0, 00:20:53.250 "io_path_stat": false, 00:20:53.250 "allow_accel_sequence": false, 00:20:53.250 "rdma_max_cq_size": 0, 00:20:53.250 "rdma_cm_event_timeout_ms": 0, 00:20:53.250 "dhchap_digests": [ 00:20:53.250 "sha256", 00:20:53.250 "sha384", 00:20:53.250 "sha512" 00:20:53.250 ], 00:20:53.250 "dhchap_dhgroups": [ 00:20:53.250 "null", 00:20:53.250 "ffdhe2048", 00:20:53.250 "ffdhe3072", 00:20:53.250 "ffdhe4096", 00:20:53.250 "ffdhe6144", 00:20:53.250 "ffdhe8192" 00:20:53.250 ] 00:20:53.250 } 00:20:53.250 }, 00:20:53.250 { 00:20:53.250 "method": "bdev_nvme_attach_controller", 00:20:53.250 "params": { 00:20:53.250 "name": "TLSTEST", 00:20:53.250 "trtype": "TCP", 00:20:53.250 "adrfam": "IPv4", 00:20:53.250 "traddr": "10.0.0.2", 00:20:53.250 "trsvcid": "4420", 00:20:53.250 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.250 "prchk_reftag": false, 00:20:53.250 "prchk_guard": false, 00:20:53.250 "ctrlr_loss_timeout_sec": 0, 00:20:53.250 "reconnect_delay_sec": 0, 00:20:53.250 "fast_io_fail_timeout_sec": 0, 00:20:53.250 "psk": "/tmp/tmp.G8UBtl6C83", 00:20:53.250 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:53.250 "hdgst": false, 00:20:53.250 "ddgst": false 00:20:53.250 } 00:20:53.250 }, 00:20:53.250 { 00:20:53.250 "method": 
"bdev_nvme_set_hotplug", 00:20:53.250 "params": { 00:20:53.250 "period_us": 100000, 00:20:53.250 "enable": false 00:20:53.250 } 00:20:53.250 }, 00:20:53.250 { 00:20:53.250 "method": "bdev_wait_for_examine" 00:20:53.250 } 00:20:53.250 ] 00:20:53.250 }, 00:20:53.250 { 00:20:53.250 "subsystem": "nbd", 00:20:53.250 "config": [] 00:20:53.250 } 00:20:53.250 ] 00:20:53.250 }' 00:20:53.250 12:09:42 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 2253025 00:20:53.250 12:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2253025 ']' 00:20:53.250 12:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2253025 00:20:53.250 12:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:53.250 12:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:53.250 12:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2253025 00:20:53.250 12:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:20:53.250 12:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:20:53.250 12:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2253025' 00:20:53.250 killing process with pid 2253025 00:20:53.250 12:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2253025 00:20:53.250 Received shutdown signal, test time was about 10.000000 seconds 00:20:53.250 00:20:53.250 Latency(us) 00:20:53.250 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:53.250 =================================================================================================================== 00:20:53.250 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:53.250 [2024-06-10 12:09:42.717804] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 
00:20:53.250 12:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2253025 00:20:53.508 12:09:42 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 2252729 00:20:53.508 12:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2252729 ']' 00:20:53.508 12:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2252729 00:20:53.508 12:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:53.508 12:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:53.508 12:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2252729 00:20:53.508 12:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:20:53.508 12:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:20:53.508 12:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2252729' 00:20:53.508 killing process with pid 2252729 00:20:53.508 12:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2252729 00:20:53.508 [2024-06-10 12:09:42.950062] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:53.508 12:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2252729 00:20:53.767 12:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:53.767 12:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:53.767 12:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:53.767 12:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:20:53.767 "subsystems": [ 00:20:53.767 { 00:20:53.767 "subsystem": "keyring", 00:20:53.767 "config": [] 00:20:53.767 }, 00:20:53.767 { 00:20:53.767 "subsystem": "iobuf", 00:20:53.767 "config": [ 00:20:53.767 { 00:20:53.767 "method": 
"iobuf_set_options", 00:20:53.767 "params": { 00:20:53.767 "small_pool_count": 8192, 00:20:53.767 "large_pool_count": 1024, 00:20:53.767 "small_bufsize": 8192, 00:20:53.767 "large_bufsize": 135168 00:20:53.767 } 00:20:53.767 } 00:20:53.767 ] 00:20:53.767 }, 00:20:53.767 { 00:20:53.767 "subsystem": "sock", 00:20:53.767 "config": [ 00:20:53.767 { 00:20:53.767 "method": "sock_set_default_impl", 00:20:53.767 "params": { 00:20:53.767 "impl_name": "posix" 00:20:53.767 } 00:20:53.767 }, 00:20:53.767 { 00:20:53.767 "method": "sock_impl_set_options", 00:20:53.767 "params": { 00:20:53.767 "impl_name": "ssl", 00:20:53.767 "recv_buf_size": 4096, 00:20:53.767 "send_buf_size": 4096, 00:20:53.767 "enable_recv_pipe": true, 00:20:53.767 "enable_quickack": false, 00:20:53.767 "enable_placement_id": 0, 00:20:53.767 "enable_zerocopy_send_server": true, 00:20:53.767 "enable_zerocopy_send_client": false, 00:20:53.767 "zerocopy_threshold": 0, 00:20:53.767 "tls_version": 0, 00:20:53.767 "enable_ktls": false 00:20:53.767 } 00:20:53.767 }, 00:20:53.767 { 00:20:53.767 "method": "sock_impl_set_options", 00:20:53.767 "params": { 00:20:53.767 "impl_name": "posix", 00:20:53.767 "recv_buf_size": 2097152, 00:20:53.767 "send_buf_size": 2097152, 00:20:53.767 "enable_recv_pipe": true, 00:20:53.767 "enable_quickack": false, 00:20:53.767 "enable_placement_id": 0, 00:20:53.767 "enable_zerocopy_send_server": true, 00:20:53.767 "enable_zerocopy_send_client": false, 00:20:53.767 "zerocopy_threshold": 0, 00:20:53.768 "tls_version": 0, 00:20:53.768 "enable_ktls": false 00:20:53.768 } 00:20:53.768 } 00:20:53.768 ] 00:20:53.768 }, 00:20:53.768 { 00:20:53.768 "subsystem": "vmd", 00:20:53.768 "config": [] 00:20:53.768 }, 00:20:53.768 { 00:20:53.768 "subsystem": "accel", 00:20:53.768 "config": [ 00:20:53.768 { 00:20:53.768 "method": "accel_set_options", 00:20:53.768 "params": { 00:20:53.768 "small_cache_size": 128, 00:20:53.768 "large_cache_size": 16, 00:20:53.768 "task_count": 2048, 00:20:53.768 
"sequence_count": 2048, 00:20:53.768 "buf_count": 2048 00:20:53.768 } 00:20:53.768 } 00:20:53.768 ] 00:20:53.768 }, 00:20:53.768 { 00:20:53.768 "subsystem": "bdev", 00:20:53.768 "config": [ 00:20:53.768 { 00:20:53.768 "method": "bdev_set_options", 00:20:53.768 "params": { 00:20:53.768 "bdev_io_pool_size": 65535, 00:20:53.768 "bdev_io_cache_size": 256, 00:20:53.768 "bdev_auto_examine": true, 00:20:53.768 "iobuf_small_cache_size": 128, 00:20:53.768 "iobuf_large_cache_size": 16 00:20:53.768 } 00:20:53.768 }, 00:20:53.768 { 00:20:53.768 "method": "bdev_raid_set_options", 00:20:53.768 "params": { 00:20:53.768 "process_window_size_kb": 1024 00:20:53.768 } 00:20:53.768 }, 00:20:53.768 { 00:20:53.768 "method": "bdev_iscsi_set_options", 00:20:53.768 "params": { 00:20:53.768 "timeout_sec": 30 00:20:53.768 } 00:20:53.768 }, 00:20:53.768 { 00:20:53.768 "method": "bdev_nvme_set_options", 00:20:53.768 "params": { 00:20:53.768 "action_on_timeout": "none", 00:20:53.768 "timeout_us": 0, 00:20:53.768 "timeout_admin_us": 0, 00:20:53.768 "keep_alive_timeout_ms": 10000, 00:20:53.768 "arbitration_burst": 0, 00:20:53.768 "low_priority_weight": 0, 00:20:53.768 "medium_priority_weight": 0, 00:20:53.768 "high_priority_weight": 0, 00:20:53.768 "nvme_adminq_poll_period_us": 10000, 00:20:53.768 "nvme_ioq_poll_period_us": 0, 00:20:53.768 "io_queue_requests": 0, 00:20:53.768 "delay_cmd_submit": true, 00:20:53.768 "transport_retry_count": 4, 00:20:53.768 "bdev_retry_count": 3, 00:20:53.768 "transport_ack_timeout": 0, 00:20:53.768 "ctrlr_loss_timeout_sec": 0, 00:20:53.768 "reconnect_delay_sec": 0, 00:20:53.768 "fast_io_fail_timeout_sec": 0, 00:20:53.768 "disable_auto_failback": false, 00:20:53.768 "generate_uuids": false, 00:20:53.768 "transport_tos": 0, 00:20:53.768 "nvme_error_stat": false, 00:20:53.768 "rdma_srq_size": 0, 00:20:53.768 "io_path_stat": false, 00:20:53.768 "allow_accel_sequence": false, 00:20:53.768 "rdma_max_cq_size": 0, 00:20:53.768 "rdma_cm_event_timeout_ms": 0, 00:20:53.768 
"dhchap_digests": [ 00:20:53.768 "sha256", 00:20:53.768 "sha384", 00:20:53.768 "sha512" 00:20:53.768 ], 00:20:53.768 "dhchap_dhgroups": [ 00:20:53.768 "null", 00:20:53.768 "ffdhe2048", 00:20:53.768 "ffdhe3072", 00:20:53.768 "ffdhe4096", 00:20:53.768 "ffdhe6144", 00:20:53.768 "ffdhe8192" 00:20:53.768 ] 00:20:53.768 } 00:20:53.768 }, 00:20:53.768 { 00:20:53.768 "method": "bdev_nvme_set_hotplug", 00:20:53.768 "params": { 00:20:53.768 "period_us": 100000, 00:20:53.768 "enable": false 00:20:53.768 } 00:20:53.768 }, 00:20:53.768 { 00:20:53.768 "method": "bdev_malloc_create", 00:20:53.768 "params": { 00:20:53.768 "name": "malloc0", 00:20:53.768 "num_blocks": 8192, 00:20:53.768 "block_size": 4096, 00:20:53.768 "physical_block_size": 4096, 00:20:53.768 "uuid": "c317d1d9-be0a-43bd-8345-0a001f97f66e", 00:20:53.768 "optimal_io_boundary": 0 00:20:53.768 } 00:20:53.768 }, 00:20:53.768 { 00:20:53.768 "method": "bdev_wait_for_examine" 00:20:53.768 } 00:20:53.768 ] 00:20:53.768 }, 00:20:53.768 { 00:20:53.768 "subsystem": "nbd", 00:20:53.768 "config": [] 00:20:53.768 }, 00:20:53.768 { 00:20:53.768 "subsystem": "scheduler", 00:20:53.768 "config": [ 00:20:53.768 { 00:20:53.768 "method": "framework_set_scheduler", 00:20:53.768 "params": { 00:20:53.768 "name": "static" 00:20:53.768 } 00:20:53.768 } 00:20:53.768 ] 00:20:53.768 }, 00:20:53.768 { 00:20:53.768 "subsystem": "nvmf", 00:20:53.768 "config": [ 00:20:53.768 { 00:20:53.768 "method": "nvmf_set_config", 00:20:53.768 "params": { 00:20:53.768 "discovery_filter": "match_any", 00:20:53.768 "admin_cmd_passthru": { 00:20:53.768 "identify_ctrlr": false 00:20:53.768 } 00:20:53.768 } 00:20:53.768 }, 00:20:53.768 { 00:20:53.768 "method": "nvmf_set_max_subsystems", 00:20:53.768 "params": { 00:20:53.768 "max_subsystems": 1024 00:20:53.768 } 00:20:53.768 }, 00:20:53.768 { 00:20:53.768 "method": "nvmf_set_crdt", 00:20:53.768 "params": { 00:20:53.768 "crdt1": 0, 00:20:53.768 "crdt2": 0, 00:20:53.768 "crdt3": 0 00:20:53.768 } 00:20:53.768 }, 
00:20:53.768 { 00:20:53.768 "method": "nvmf_create_transport", 00:20:53.768 "params": { 00:20:53.768 "trtype": "TCP", 00:20:53.768 "max_queue_depth": 128, 00:20:53.768 "max_io_qpairs_per_ctrlr": 127, 00:20:53.768 "in_capsule_data_size": 4096, 00:20:53.768 "max_io_size": 131072, 00:20:53.768 "io_unit_size": 131072, 00:20:53.768 "max_aq_depth": 128, 00:20:53.768 "num_shared_buffers": 511, 00:20:53.768 "buf_cache_size": 4294967295, 00:20:53.768 "dif_insert_or_strip": false, 00:20:53.768 "zcopy": false, 00:20:53.768 "c2h_success": false, 00:20:53.768 "sock_priority": 0, 00:20:53.768 "abort_timeout_sec": 1, 00:20:53.768 "ack_timeout": 0, 00:20:53.768 "data_wr_pool_size": 0 00:20:53.768 } 00:20:53.768 }, 00:20:53.768 { 00:20:53.768 "method": "nvmf_create_subsystem", 00:20:53.768 "params": { 00:20:53.768 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.768 "allow_any_host": false, 00:20:53.768 "serial_number": "SPDK00000000000001", 00:20:53.768 "model_number": "SPDK bdev Controller", 00:20:53.768 "max_namespaces": 10, 00:20:53.768 "min_cntlid": 1, 00:20:53.768 "max_cntlid": 65519, 00:20:53.768 "ana_reporting": false 00:20:53.768 } 00:20:53.768 }, 00:20:53.768 { 00:20:53.768 "method": "nvmf_subsystem_add_host", 00:20:53.768 "params": { 00:20:53.768 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.768 "host": "nqn.2016-06.io.spdk:host1", 00:20:53.768 "psk": "/tmp/tmp.G8UBtl6C83" 00:20:53.768 } 00:20:53.768 }, 00:20:53.768 { 00:20:53.768 "method": "nvmf_subsystem_add_ns", 00:20:53.768 "params": { 00:20:53.768 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.768 "namespace": { 00:20:53.768 "nsid": 1, 00:20:53.768 "bdev_name": "malloc0", 00:20:53.768 "nguid": "C317D1D9BE0A43BD83450A001F97F66E", 00:20:53.768 "uuid": "c317d1d9-be0a-43bd-8345-0a001f97f66e", 00:20:53.768 "no_auto_visible": false 00:20:53.768 } 00:20:53.768 } 00:20:53.768 }, 00:20:53.768 { 00:20:53.768 "method": "nvmf_subsystem_add_listener", 00:20:53.768 "params": { 00:20:53.768 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:20:53.768 "listen_address": { 00:20:53.768 "trtype": "TCP", 00:20:53.768 "adrfam": "IPv4", 00:20:53.768 "traddr": "10.0.0.2", 00:20:53.768 "trsvcid": "4420" 00:20:53.768 }, 00:20:53.768 "secure_channel": true 00:20:53.768 } 00:20:53.768 } 00:20:53.768 ] 00:20:53.768 } 00:20:53.768 ] 00:20:53.768 }' 00:20:53.768 12:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.768 12:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2253318 00:20:53.768 12:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:53.768 12:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2253318 00:20:53.768 12:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2253318 ']' 00:20:53.768 12:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.769 12:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:53.769 12:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.769 12:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:53.769 12:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.769 [2024-06-10 12:09:43.196567] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:20:53.769 [2024-06-10 12:09:43.196617] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.769 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.769 [2024-06-10 12:09:43.269568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.025 [2024-06-10 12:09:43.334857] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:54.025 [2024-06-10 12:09:43.334891] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:54.025 [2024-06-10 12:09:43.334901] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:54.025 [2024-06-10 12:09:43.334909] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:54.025 [2024-06-10 12:09:43.334917] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:54.025 [2024-06-10 12:09:43.334974] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:20:54.025 [2024-06-10 12:09:43.537690] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:54.283 [2024-06-10 12:09:43.553661] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:54.283 [2024-06-10 12:09:43.569704] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:54.283 [2024-06-10 12:09:43.579606] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:54.540 12:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:54.540 12:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:54.540 12:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:54.540 12:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:54.540 12:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.540 12:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.540 12:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2253597 00:20:54.540 12:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2253597 /var/tmp/bdevperf.sock 00:20:54.540 12:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:20:54.540 "subsystems": [ 00:20:54.540 { 00:20:54.540 "subsystem": "keyring", 00:20:54.540 "config": [] 00:20:54.540 }, 00:20:54.540 { 00:20:54.540 "subsystem": "iobuf", 00:20:54.540 "config": [ 00:20:54.540 { 00:20:54.540 "method": "iobuf_set_options", 00:20:54.540 "params": { 00:20:54.540 "small_pool_count": 8192, 00:20:54.540 "large_pool_count": 1024, 00:20:54.540 "small_bufsize": 8192, 00:20:54.540 "large_bufsize": 135168 00:20:54.540 } 00:20:54.540 } 00:20:54.540 ] 00:20:54.540 }, 
00:20:54.540 { 00:20:54.540 "subsystem": "sock", 00:20:54.540 "config": [ 00:20:54.540 { 00:20:54.540 "method": "sock_set_default_impl", 00:20:54.540 "params": { 00:20:54.540 "impl_name": "posix" 00:20:54.540 } 00:20:54.540 }, 00:20:54.540 { 00:20:54.540 "method": "sock_impl_set_options", 00:20:54.540 "params": { 00:20:54.540 "impl_name": "ssl", 00:20:54.540 "recv_buf_size": 4096, 00:20:54.540 "send_buf_size": 4096, 00:20:54.540 "enable_recv_pipe": true, 00:20:54.540 "enable_quickack": false, 00:20:54.540 "enable_placement_id": 0, 00:20:54.540 "enable_zerocopy_send_server": true, 00:20:54.540 "enable_zerocopy_send_client": false, 00:20:54.540 "zerocopy_threshold": 0, 00:20:54.540 "tls_version": 0, 00:20:54.540 "enable_ktls": false 00:20:54.540 } 00:20:54.540 }, 00:20:54.540 { 00:20:54.540 "method": "sock_impl_set_options", 00:20:54.540 "params": { 00:20:54.540 "impl_name": "posix", 00:20:54.540 "recv_buf_size": 2097152, 00:20:54.540 "send_buf_size": 2097152, 00:20:54.540 "enable_recv_pipe": true, 00:20:54.540 "enable_quickack": false, 00:20:54.540 "enable_placement_id": 0, 00:20:54.540 "enable_zerocopy_send_server": true, 00:20:54.540 "enable_zerocopy_send_client": false, 00:20:54.540 "zerocopy_threshold": 0, 00:20:54.540 "tls_version": 0, 00:20:54.540 "enable_ktls": false 00:20:54.540 } 00:20:54.540 } 00:20:54.540 ] 00:20:54.541 }, 00:20:54.541 { 00:20:54.541 "subsystem": "vmd", 00:20:54.541 "config": [] 00:20:54.541 }, 00:20:54.541 { 00:20:54.541 "subsystem": "accel", 00:20:54.541 "config": [ 00:20:54.541 { 00:20:54.541 "method": "accel_set_options", 00:20:54.541 "params": { 00:20:54.541 "small_cache_size": 128, 00:20:54.541 "large_cache_size": 16, 00:20:54.541 "task_count": 2048, 00:20:54.541 "sequence_count": 2048, 00:20:54.541 "buf_count": 2048 00:20:54.541 } 00:20:54.541 } 00:20:54.541 ] 00:20:54.541 }, 00:20:54.541 { 00:20:54.541 "subsystem": "bdev", 00:20:54.541 "config": [ 00:20:54.541 { 00:20:54.541 "method": "bdev_set_options", 00:20:54.541 "params": { 
00:20:54.541 "bdev_io_pool_size": 65535, 00:20:54.541 "bdev_io_cache_size": 256, 00:20:54.541 "bdev_auto_examine": true, 00:20:54.541 "iobuf_small_cache_size": 128, 00:20:54.541 "iobuf_large_cache_size": 16 00:20:54.541 } 00:20:54.541 }, 00:20:54.541 { 00:20:54.541 "method": "bdev_raid_set_options", 00:20:54.541 "params": { 00:20:54.541 "process_window_size_kb": 1024 00:20:54.541 } 00:20:54.541 }, 00:20:54.541 { 00:20:54.541 "method": "bdev_iscsi_set_options", 00:20:54.541 "params": { 00:20:54.541 "timeout_sec": 30 00:20:54.541 } 00:20:54.541 }, 00:20:54.541 { 00:20:54.541 "method": "bdev_nvme_set_options", 00:20:54.541 "params": { 00:20:54.541 "action_on_timeout": "none", 00:20:54.541 "timeout_us": 0, 00:20:54.541 "timeout_admin_us": 0, 00:20:54.541 "keep_alive_timeout_ms": 10000, 00:20:54.541 "arbitration_burst": 0, 00:20:54.541 "low_priority_weight": 0, 00:20:54.541 "medium_priority_weight": 0, 00:20:54.541 "high_priority_weight": 0, 00:20:54.541 "nvme_adminq_poll_period_us": 10000, 00:20:54.541 "nvme_ioq_poll_period_us": 0, 00:20:54.541 "io_queue_requests": 512, 00:20:54.541 "delay_cmd_submit": true, 00:20:54.541 "transport_retry_count": 4, 00:20:54.541 "bdev_retry_count": 3, 00:20:54.541 "transport_ack_timeout": 0, 00:20:54.541 "ctrlr_loss_timeout_sec": 0, 00:20:54.541 "reconnect_delay_sec": 0, 00:20:54.541 "fast_io_fail_timeout_sec": 0, 00:20:54.541 "disable_auto_failback": false, 00:20:54.541 "generate_uuids": false, 00:20:54.541 "transport_tos": 0, 00:20:54.541 "nvme_error_stat": false, 00:20:54.541 "rdma_srq_size": 0, 00:20:54.541 "io_path_stat": false, 00:20:54.541 "allow_accel_sequence": false, 00:20:54.541 "rdma_max_cq_size": 0, 00:20:54.541 "rdma_cm_event_timeout_ms": 0, 00:20:54.541 "dhchap_digests": [ 00:20:54.541 "sha256", 00:20:54.541 "sha384", 00:20:54.541 "sha512" 00:20:54.541 ], 00:20:54.541 "dhchap_dhgroups": [ 00:20:54.541 "null", 00:20:54.541 "ffdhe2048", 00:20:54.541 "ffdhe3072", 00:20:54.541 "ffdhe4096", 00:20:54.541 "ffdhe6144", 
00:20:54.541 "ffdhe8192" 00:20:54.541 ] 00:20:54.541 } 00:20:54.541 }, 00:20:54.541 { 00:20:54.541 "method": "bdev_nvme_attach_controller", 00:20:54.541 "params": { 00:20:54.541 "name": "TLSTEST", 00:20:54.541 "trtype": "TCP", 00:20:54.541 "adrfam": "IPv4", 00:20:54.541 "traddr": "10.0.0.2", 00:20:54.541 "trsvcid": "4420", 00:20:54.541 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.541 "prchk_reftag": false, 00:20:54.541 "prchk_guard": false, 00:20:54.541 "ctrlr_loss_timeout_sec": 0, 00:20:54.541 "reconnect_delay_sec": 0, 00:20:54.541 "fast_io_fail_timeout_sec": 0, 00:20:54.541 "psk": "/tmp/tmp.G8UBtl6C83", 00:20:54.541 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:54.541 "hdgst": false, 00:20:54.541 "ddgst": false 00:20:54.541 } 00:20:54.541 }, 00:20:54.541 { 00:20:54.541 "method": "bdev_nvme_set_hotplug", 00:20:54.541 "params": { 00:20:54.541 "period_us": 100000, 00:20:54.541 "enable": false 00:20:54.541 } 00:20:54.541 }, 00:20:54.541 { 00:20:54.541 "method": "bdev_wait_for_examine" 00:20:54.541 } 00:20:54.541 ] 00:20:54.541 }, 00:20:54.541 { 00:20:54.541 "subsystem": "nbd", 00:20:54.541 "config": [] 00:20:54.541 } 00:20:54.541 ] 00:20:54.541 }' 00:20:54.541 12:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2253597 ']' 00:20:54.541 12:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:54.541 12:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:54.541 12:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:54.541 12:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:54.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:54.541 12:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:54.541 12:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.799 [2024-06-10 12:09:44.078663] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:20:54.799 [2024-06-10 12:09:44.078714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2253597 ] 00:20:54.799 EAL: No free 2048 kB hugepages reported on node 1 00:20:54.799 [2024-06-10 12:09:44.144097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.799 [2024-06-10 12:09:44.213029] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:20:55.056 [2024-06-10 12:09:44.355714] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:55.056 [2024-06-10 12:09:44.355810] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:55.621 12:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:55.621 12:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:55.621 12:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:55.621 Running I/O for 10 seconds... 
00:21:05.587 00:21:05.587 Latency(us) 00:21:05.587 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.587 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:05.587 Verification LBA range: start 0x0 length 0x2000 00:21:05.587 TLSTESTn1 : 10.02 4932.55 19.27 0.00 0.00 25910.57 4561.31 53477.38 00:21:05.587 =================================================================================================================== 00:21:05.587 Total : 4932.55 19.27 0.00 0.00 25910.57 4561.31 53477.38 00:21:05.587 0 00:21:05.587 12:09:55 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:05.587 12:09:55 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 2253597 00:21:05.587 12:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2253597 ']' 00:21:05.587 12:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2253597 00:21:05.587 12:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:05.587 12:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:05.587 12:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2253597 00:21:05.587 12:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:21:05.587 12:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:21:05.587 12:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2253597' 00:21:05.587 killing process with pid 2253597 00:21:05.587 12:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2253597 00:21:05.587 Received shutdown signal, test time was about 10.000000 seconds 00:21:05.587 00:21:05.587 Latency(us) 00:21:05.587 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.587 
=================================================================================================================== 00:21:05.587 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:05.587 [2024-06-10 12:09:55.075180] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:05.587 12:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2253597 00:21:05.845 12:09:55 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 2253318 00:21:05.845 12:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2253318 ']' 00:21:05.845 12:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2253318 00:21:05.845 12:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:05.845 12:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:05.845 12:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2253318 00:21:05.845 12:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:05.845 12:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:05.845 12:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2253318' 00:21:05.845 killing process with pid 2253318 00:21:05.845 12:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2253318 00:21:05.845 [2024-06-10 12:09:55.310678] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:05.845 12:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2253318 00:21:06.104 12:09:55 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:06.104 12:09:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:06.104 12:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # 
xtrace_disable 00:21:06.104 12:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.104 12:09:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2255469 00:21:06.104 12:09:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:06.104 12:09:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2255469 00:21:06.104 12:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2255469 ']' 00:21:06.104 12:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.104 12:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:06.104 12:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.104 12:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:06.104 12:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.104 [2024-06-10 12:09:55.560253] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:21:06.104 [2024-06-10 12:09:55.560302] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:06.104 EAL: No free 2048 kB hugepages reported on node 1 00:21:06.362 [2024-06-10 12:09:55.632938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.362 [2024-06-10 12:09:55.703598] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:06.362 [2024-06-10 12:09:55.703641] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.362 [2024-06-10 12:09:55.703651] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:06.362 [2024-06-10 12:09:55.703660] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:06.362 [2024-06-10 12:09:55.703683] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:06.362 [2024-06-10 12:09:55.703704] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.928 12:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:06.928 12:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:06.928 12:09:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:06.928 12:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:06.928 12:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.928 12:09:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.928 12:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.G8UBtl6C83 00:21:06.928 12:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.G8UBtl6C83 00:21:06.928 12:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:07.186 [2024-06-10 12:09:56.549436] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.186 12:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:07.444 12:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:07.444 [2024-06-10 12:09:56.894309] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:07.444 [2024-06-10 12:09:56.894529] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:07.444 12:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:07.703 malloc0 00:21:07.703 12:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:07.959 12:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.G8UBtl6C83 00:21:07.959 [2024-06-10 12:09:57.411962] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:07.959 12:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2255766 00:21:07.959 12:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:07.959 12:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:07.959 12:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2255766 /var/tmp/bdevperf.sock 00:21:07.959 12:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2255766 ']' 00:21:07.959 12:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:07.959 12:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local 
max_retries=100 00:21:07.959 12:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:07.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:07.959 12:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:07.959 12:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.217 [2024-06-10 12:09:57.479983] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:21:08.217 [2024-06-10 12:09:57.480035] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2255766 ] 00:21:08.217 EAL: No free 2048 kB hugepages reported on node 1 00:21:08.217 [2024-06-10 12:09:57.552213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.217 [2024-06-10 12:09:57.622091] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.784 12:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:08.784 12:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:08.784 12:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.G8UBtl6C83 00:21:09.043 12:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:09.302 [2024-06-10 12:09:58.576013] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:09.302 nvme0n1 00:21:09.302 
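For reference, the target-side RPC sequence that the trace above just exercised (transport creation through PSK host registration) can be summarized as a dry-run sketch. The `rpc.py` path and the PSK file name below are placeholders, not the actual paths from this run; the sketch only echoes the commands rather than executing them against a live SPDK target.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf TLS target setup steps seen in the log above.
# RPC and PSK are assumed placeholder paths, not taken from this environment.
RPC=scripts/rpc.py   # assumed location of SPDK's rpc.py
PSK=/tmp/psk.txt     # placeholder PSK file (the log uses a mktemp name)

# Print each command instead of running it, so the sequence is visible
# without a live SPDK target listening on the RPC socket.
run() { echo "+ $*"; }

run $RPC nvmf_create_transport -t tcp -o
run $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
run $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
run $RPC bdev_malloc_create 32 4096 -b malloc0
run $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
run $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk $PSK
```

Note the `-k` flag on `nvmf_subsystem_add_listener`, which enables the TLS secure channel that the log flags as experimental, and the `--psk` path form on `nvmf_subsystem_add_host`, which the trace reports as deprecated for removal in v24.09.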
12:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:09.302 Running I/O for 1 seconds... 00:21:10.238 00:21:10.238 Latency(us) 00:21:10.238 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.238 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:10.238 Verification LBA range: start 0x0 length 0x2000 00:21:10.238 nvme0n1 : 1.01 5496.11 21.47 0.00 0.00 23118.84 5898.24 28521.27 00:21:10.238 =================================================================================================================== 00:21:10.238 Total : 5496.11 21.47 0.00 0.00 23118.84 5898.24 28521.27 00:21:10.238 0 00:21:10.497 12:09:59 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 2255766 00:21:10.497 12:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2255766 ']' 00:21:10.497 12:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2255766 00:21:10.497 12:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:10.497 12:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:10.497 12:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2255766 00:21:10.498 12:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:10.498 12:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:10.498 12:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2255766' 00:21:10.498 killing process with pid 2255766 00:21:10.498 12:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2255766 00:21:10.498 Received shutdown signal, test time was about 1.000000 seconds 00:21:10.498 00:21:10.498 Latency(us) 00:21:10.498 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:21:10.498 =================================================================================================================== 00:21:10.498 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:10.498 12:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2255766 00:21:10.498 12:09:59 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 2255469 00:21:10.498 12:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2255469 ']' 00:21:10.498 12:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2255469 00:21:10.498 12:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:10.498 12:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:10.498 12:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2255469 00:21:10.757 12:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:10.757 12:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:10.757 12:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2255469' 00:21:10.757 killing process with pid 2255469 00:21:10.757 12:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2255469 00:21:10.757 [2024-06-10 12:10:00.055384] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:10.757 12:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2255469 00:21:10.757 12:10:00 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:21:10.757 12:10:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:10.757 12:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:10.757 12:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:10.757 12:10:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 
-- # nvmfpid=2256306 00:21:10.757 12:10:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:10.757 12:10:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2256306 00:21:10.757 12:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2256306 ']' 00:21:10.757 12:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.757 12:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:10.757 12:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.757 12:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:10.757 12:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.016 [2024-06-10 12:10:00.308572] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:21:11.016 [2024-06-10 12:10:00.308623] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:11.016 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.016 [2024-06-10 12:10:00.383211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.016 [2024-06-10 12:10:00.448312] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:11.016 [2024-06-10 12:10:00.448356] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:11.016 [2024-06-10 12:10:00.448366] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:11.016 [2024-06-10 12:10:00.448378] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:11.016 [2024-06-10 12:10:00.448385] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:11.016 [2024-06-10 12:10:00.448407] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.951 12:10:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:11.951 12:10:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:11.951 12:10:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:11.951 12:10:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:11.951 12:10:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.952 12:10:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.952 12:10:01 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:21:11.952 12:10:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:11.952 12:10:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.952 [2024-06-10 12:10:01.156299] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.952 malloc0 00:21:11.952 [2024-06-10 12:10:01.184749] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:11.952 [2024-06-10 12:10:01.184960] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.952 12:10:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:11.952 12:10:01 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=2256584 00:21:11.952 12:10:01 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:11.952 12:10:01 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 2256584 /var/tmp/bdevperf.sock 00:21:11.952 12:10:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2256584 ']' 00:21:11.952 12:10:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:11.952 12:10:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:11.952 12:10:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:11.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:11.952 12:10:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:11.952 12:10:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.952 [2024-06-10 12:10:01.261749] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:21:11.952 [2024-06-10 12:10:01.261793] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2256584 ] 00:21:11.952 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.952 [2024-06-10 12:10:01.331379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.952 [2024-06-10 12:10:01.400196] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.887 12:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:12.887 12:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:12.887 12:10:02 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.G8UBtl6C83 00:21:12.887 12:10:02 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:12.887 [2024-06-10 12:10:02.370836] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:13.148 nvme0n1 00:21:13.148 12:10:02 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:13.148 Running I/O for 1 seconds... 
00:21:14.084 00:21:14.084 Latency(us) 00:21:14.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.084 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:14.084 Verification LBA range: start 0x0 length 0x2000 00:21:14.084 nvme0n1 : 1.02 5134.12 20.06 0.00 0.00 24716.54 6973.03 76755.76 00:21:14.084 =================================================================================================================== 00:21:14.084 Total : 5134.12 20.06 0.00 0.00 24716.54 6973.03 76755.76 00:21:14.084 0 00:21:14.084 12:10:03 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:21:14.084 12:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:14.084 12:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.343 12:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:14.343 12:10:03 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:21:14.343 "subsystems": [ 00:21:14.343 { 00:21:14.343 "subsystem": "keyring", 00:21:14.343 "config": [ 00:21:14.343 { 00:21:14.343 "method": "keyring_file_add_key", 00:21:14.343 "params": { 00:21:14.343 "name": "key0", 00:21:14.343 "path": "/tmp/tmp.G8UBtl6C83" 00:21:14.343 } 00:21:14.343 } 00:21:14.343 ] 00:21:14.343 }, 00:21:14.343 { 00:21:14.343 "subsystem": "iobuf", 00:21:14.343 "config": [ 00:21:14.343 { 00:21:14.343 "method": "iobuf_set_options", 00:21:14.343 "params": { 00:21:14.343 "small_pool_count": 8192, 00:21:14.343 "large_pool_count": 1024, 00:21:14.343 "small_bufsize": 8192, 00:21:14.343 "large_bufsize": 135168 00:21:14.343 } 00:21:14.343 } 00:21:14.343 ] 00:21:14.343 }, 00:21:14.343 { 00:21:14.343 "subsystem": "sock", 00:21:14.343 "config": [ 00:21:14.343 { 00:21:14.343 "method": "sock_set_default_impl", 00:21:14.343 "params": { 00:21:14.343 "impl_name": "posix" 00:21:14.343 } 00:21:14.343 }, 00:21:14.343 { 00:21:14.343 "method": "sock_impl_set_options", 00:21:14.343 
"params": { 00:21:14.343 "impl_name": "ssl", 00:21:14.343 "recv_buf_size": 4096, 00:21:14.343 "send_buf_size": 4096, 00:21:14.343 "enable_recv_pipe": true, 00:21:14.343 "enable_quickack": false, 00:21:14.343 "enable_placement_id": 0, 00:21:14.343 "enable_zerocopy_send_server": true, 00:21:14.343 "enable_zerocopy_send_client": false, 00:21:14.343 "zerocopy_threshold": 0, 00:21:14.343 "tls_version": 0, 00:21:14.343 "enable_ktls": false 00:21:14.343 } 00:21:14.343 }, 00:21:14.343 { 00:21:14.343 "method": "sock_impl_set_options", 00:21:14.343 "params": { 00:21:14.343 "impl_name": "posix", 00:21:14.343 "recv_buf_size": 2097152, 00:21:14.343 "send_buf_size": 2097152, 00:21:14.343 "enable_recv_pipe": true, 00:21:14.343 "enable_quickack": false, 00:21:14.343 "enable_placement_id": 0, 00:21:14.343 "enable_zerocopy_send_server": true, 00:21:14.343 "enable_zerocopy_send_client": false, 00:21:14.343 "zerocopy_threshold": 0, 00:21:14.343 "tls_version": 0, 00:21:14.343 "enable_ktls": false 00:21:14.343 } 00:21:14.343 } 00:21:14.343 ] 00:21:14.343 }, 00:21:14.343 { 00:21:14.343 "subsystem": "vmd", 00:21:14.343 "config": [] 00:21:14.343 }, 00:21:14.343 { 00:21:14.343 "subsystem": "accel", 00:21:14.343 "config": [ 00:21:14.343 { 00:21:14.343 "method": "accel_set_options", 00:21:14.343 "params": { 00:21:14.343 "small_cache_size": 128, 00:21:14.343 "large_cache_size": 16, 00:21:14.343 "task_count": 2048, 00:21:14.343 "sequence_count": 2048, 00:21:14.343 "buf_count": 2048 00:21:14.343 } 00:21:14.343 } 00:21:14.343 ] 00:21:14.343 }, 00:21:14.343 { 00:21:14.343 "subsystem": "bdev", 00:21:14.343 "config": [ 00:21:14.343 { 00:21:14.344 "method": "bdev_set_options", 00:21:14.344 "params": { 00:21:14.344 "bdev_io_pool_size": 65535, 00:21:14.344 "bdev_io_cache_size": 256, 00:21:14.344 "bdev_auto_examine": true, 00:21:14.344 "iobuf_small_cache_size": 128, 00:21:14.344 "iobuf_large_cache_size": 16 00:21:14.344 } 00:21:14.344 }, 00:21:14.344 { 00:21:14.344 "method": "bdev_raid_set_options", 
00:21:14.344 "params": { 00:21:14.344 "process_window_size_kb": 1024 00:21:14.344 } 00:21:14.344 }, 00:21:14.344 { 00:21:14.344 "method": "bdev_iscsi_set_options", 00:21:14.344 "params": { 00:21:14.344 "timeout_sec": 30 00:21:14.344 } 00:21:14.344 }, 00:21:14.344 { 00:21:14.344 "method": "bdev_nvme_set_options", 00:21:14.344 "params": { 00:21:14.344 "action_on_timeout": "none", 00:21:14.344 "timeout_us": 0, 00:21:14.344 "timeout_admin_us": 0, 00:21:14.344 "keep_alive_timeout_ms": 10000, 00:21:14.344 "arbitration_burst": 0, 00:21:14.344 "low_priority_weight": 0, 00:21:14.344 "medium_priority_weight": 0, 00:21:14.344 "high_priority_weight": 0, 00:21:14.344 "nvme_adminq_poll_period_us": 10000, 00:21:14.344 "nvme_ioq_poll_period_us": 0, 00:21:14.344 "io_queue_requests": 0, 00:21:14.344 "delay_cmd_submit": true, 00:21:14.344 "transport_retry_count": 4, 00:21:14.344 "bdev_retry_count": 3, 00:21:14.344 "transport_ack_timeout": 0, 00:21:14.344 "ctrlr_loss_timeout_sec": 0, 00:21:14.344 "reconnect_delay_sec": 0, 00:21:14.344 "fast_io_fail_timeout_sec": 0, 00:21:14.344 "disable_auto_failback": false, 00:21:14.344 "generate_uuids": false, 00:21:14.344 "transport_tos": 0, 00:21:14.344 "nvme_error_stat": false, 00:21:14.344 "rdma_srq_size": 0, 00:21:14.344 "io_path_stat": false, 00:21:14.344 "allow_accel_sequence": false, 00:21:14.344 "rdma_max_cq_size": 0, 00:21:14.344 "rdma_cm_event_timeout_ms": 0, 00:21:14.344 "dhchap_digests": [ 00:21:14.344 "sha256", 00:21:14.344 "sha384", 00:21:14.344 "sha512" 00:21:14.344 ], 00:21:14.344 "dhchap_dhgroups": [ 00:21:14.344 "null", 00:21:14.344 "ffdhe2048", 00:21:14.344 "ffdhe3072", 00:21:14.344 "ffdhe4096", 00:21:14.344 "ffdhe6144", 00:21:14.344 "ffdhe8192" 00:21:14.344 ] 00:21:14.344 } 00:21:14.344 }, 00:21:14.344 { 00:21:14.344 "method": "bdev_nvme_set_hotplug", 00:21:14.344 "params": { 00:21:14.344 "period_us": 100000, 00:21:14.344 "enable": false 00:21:14.344 } 00:21:14.344 }, 00:21:14.344 { 00:21:14.344 "method": "bdev_malloc_create", 
00:21:14.344 "params": { 00:21:14.344 "name": "malloc0", 00:21:14.344 "num_blocks": 8192, 00:21:14.344 "block_size": 4096, 00:21:14.344 "physical_block_size": 4096, 00:21:14.344 "uuid": "f32d473b-7416-4ee6-8322-4229449e838c", 00:21:14.344 "optimal_io_boundary": 0 00:21:14.344 } 00:21:14.344 }, 00:21:14.344 { 00:21:14.344 "method": "bdev_wait_for_examine" 00:21:14.344 } 00:21:14.344 ] 00:21:14.344 }, 00:21:14.344 { 00:21:14.344 "subsystem": "nbd", 00:21:14.344 "config": [] 00:21:14.344 }, 00:21:14.344 { 00:21:14.344 "subsystem": "scheduler", 00:21:14.344 "config": [ 00:21:14.344 { 00:21:14.344 "method": "framework_set_scheduler", 00:21:14.344 "params": { 00:21:14.344 "name": "static" 00:21:14.344 } 00:21:14.344 } 00:21:14.344 ] 00:21:14.344 }, 00:21:14.344 { 00:21:14.344 "subsystem": "nvmf", 00:21:14.344 "config": [ 00:21:14.344 { 00:21:14.344 "method": "nvmf_set_config", 00:21:14.344 "params": { 00:21:14.344 "discovery_filter": "match_any", 00:21:14.344 "admin_cmd_passthru": { 00:21:14.344 "identify_ctrlr": false 00:21:14.344 } 00:21:14.344 } 00:21:14.344 }, 00:21:14.344 { 00:21:14.344 "method": "nvmf_set_max_subsystems", 00:21:14.344 "params": { 00:21:14.344 "max_subsystems": 1024 00:21:14.344 } 00:21:14.344 }, 00:21:14.344 { 00:21:14.344 "method": "nvmf_set_crdt", 00:21:14.344 "params": { 00:21:14.344 "crdt1": 0, 00:21:14.344 "crdt2": 0, 00:21:14.344 "crdt3": 0 00:21:14.344 } 00:21:14.344 }, 00:21:14.344 { 00:21:14.344 "method": "nvmf_create_transport", 00:21:14.344 "params": { 00:21:14.344 "trtype": "TCP", 00:21:14.344 "max_queue_depth": 128, 00:21:14.344 "max_io_qpairs_per_ctrlr": 127, 00:21:14.344 "in_capsule_data_size": 4096, 00:21:14.344 "max_io_size": 131072, 00:21:14.344 "io_unit_size": 131072, 00:21:14.344 "max_aq_depth": 128, 00:21:14.344 "num_shared_buffers": 511, 00:21:14.344 "buf_cache_size": 4294967295, 00:21:14.344 "dif_insert_or_strip": false, 00:21:14.344 "zcopy": false, 00:21:14.344 "c2h_success": false, 00:21:14.344 "sock_priority": 0, 
00:21:14.344 "abort_timeout_sec": 1, 00:21:14.344 "ack_timeout": 0, 00:21:14.344 "data_wr_pool_size": 0 00:21:14.344 } 00:21:14.344 }, 00:21:14.344 { 00:21:14.344 "method": "nvmf_create_subsystem", 00:21:14.344 "params": { 00:21:14.344 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.344 "allow_any_host": false, 00:21:14.344 "serial_number": "00000000000000000000", 00:21:14.344 "model_number": "SPDK bdev Controller", 00:21:14.344 "max_namespaces": 32, 00:21:14.344 "min_cntlid": 1, 00:21:14.344 "max_cntlid": 65519, 00:21:14.344 "ana_reporting": false 00:21:14.344 } 00:21:14.344 }, 00:21:14.344 { 00:21:14.344 "method": "nvmf_subsystem_add_host", 00:21:14.344 "params": { 00:21:14.344 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.344 "host": "nqn.2016-06.io.spdk:host1", 00:21:14.344 "psk": "key0" 00:21:14.344 } 00:21:14.344 }, 00:21:14.344 { 00:21:14.344 "method": "nvmf_subsystem_add_ns", 00:21:14.344 "params": { 00:21:14.344 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.344 "namespace": { 00:21:14.344 "nsid": 1, 00:21:14.344 "bdev_name": "malloc0", 00:21:14.344 "nguid": "F32D473B74164EE683224229449E838C", 00:21:14.344 "uuid": "f32d473b-7416-4ee6-8322-4229449e838c", 00:21:14.344 "no_auto_visible": false 00:21:14.344 } 00:21:14.344 } 00:21:14.344 }, 00:21:14.344 { 00:21:14.344 "method": "nvmf_subsystem_add_listener", 00:21:14.344 "params": { 00:21:14.344 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.344 "listen_address": { 00:21:14.344 "trtype": "TCP", 00:21:14.344 "adrfam": "IPv4", 00:21:14.344 "traddr": "10.0.0.2", 00:21:14.344 "trsvcid": "4420" 00:21:14.344 }, 00:21:14.344 "secure_channel": true 00:21:14.344 } 00:21:14.344 } 00:21:14.344 ] 00:21:14.344 } 00:21:14.344 ] 00:21:14.344 }' 00:21:14.344 12:10:03 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:14.604 12:10:03 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:21:14.604 "subsystems": [ 00:21:14.604 { 
00:21:14.604 "subsystem": "keyring", 00:21:14.604 "config": [ 00:21:14.604 { 00:21:14.604 "method": "keyring_file_add_key", 00:21:14.604 "params": { 00:21:14.604 "name": "key0", 00:21:14.604 "path": "/tmp/tmp.G8UBtl6C83" 00:21:14.604 } 00:21:14.604 } 00:21:14.604 ] 00:21:14.604 }, 00:21:14.604 { 00:21:14.604 "subsystem": "iobuf", 00:21:14.604 "config": [ 00:21:14.604 { 00:21:14.604 "method": "iobuf_set_options", 00:21:14.604 "params": { 00:21:14.604 "small_pool_count": 8192, 00:21:14.604 "large_pool_count": 1024, 00:21:14.604 "small_bufsize": 8192, 00:21:14.604 "large_bufsize": 135168 00:21:14.604 } 00:21:14.604 } 00:21:14.604 ] 00:21:14.604 }, 00:21:14.604 { 00:21:14.604 "subsystem": "sock", 00:21:14.604 "config": [ 00:21:14.604 { 00:21:14.604 "method": "sock_set_default_impl", 00:21:14.604 "params": { 00:21:14.604 "impl_name": "posix" 00:21:14.604 } 00:21:14.604 }, 00:21:14.604 { 00:21:14.604 "method": "sock_impl_set_options", 00:21:14.604 "params": { 00:21:14.604 "impl_name": "ssl", 00:21:14.604 "recv_buf_size": 4096, 00:21:14.604 "send_buf_size": 4096, 00:21:14.604 "enable_recv_pipe": true, 00:21:14.604 "enable_quickack": false, 00:21:14.604 "enable_placement_id": 0, 00:21:14.604 "enable_zerocopy_send_server": true, 00:21:14.604 "enable_zerocopy_send_client": false, 00:21:14.604 "zerocopy_threshold": 0, 00:21:14.604 "tls_version": 0, 00:21:14.604 "enable_ktls": false 00:21:14.604 } 00:21:14.604 }, 00:21:14.604 { 00:21:14.604 "method": "sock_impl_set_options", 00:21:14.604 "params": { 00:21:14.604 "impl_name": "posix", 00:21:14.604 "recv_buf_size": 2097152, 00:21:14.604 "send_buf_size": 2097152, 00:21:14.604 "enable_recv_pipe": true, 00:21:14.604 "enable_quickack": false, 00:21:14.604 "enable_placement_id": 0, 00:21:14.604 "enable_zerocopy_send_server": true, 00:21:14.604 "enable_zerocopy_send_client": false, 00:21:14.604 "zerocopy_threshold": 0, 00:21:14.604 "tls_version": 0, 00:21:14.604 "enable_ktls": false 00:21:14.604 } 00:21:14.604 } 00:21:14.604 ] 
00:21:14.604 }, 00:21:14.604 { 00:21:14.604 "subsystem": "vmd", 00:21:14.604 "config": [] 00:21:14.604 }, 00:21:14.604 { 00:21:14.604 "subsystem": "accel", 00:21:14.604 "config": [ 00:21:14.604 { 00:21:14.604 "method": "accel_set_options", 00:21:14.604 "params": { 00:21:14.604 "small_cache_size": 128, 00:21:14.604 "large_cache_size": 16, 00:21:14.604 "task_count": 2048, 00:21:14.604 "sequence_count": 2048, 00:21:14.604 "buf_count": 2048 00:21:14.604 } 00:21:14.604 } 00:21:14.604 ] 00:21:14.604 }, 00:21:14.604 { 00:21:14.604 "subsystem": "bdev", 00:21:14.604 "config": [ 00:21:14.604 { 00:21:14.604 "method": "bdev_set_options", 00:21:14.604 "params": { 00:21:14.604 "bdev_io_pool_size": 65535, 00:21:14.604 "bdev_io_cache_size": 256, 00:21:14.604 "bdev_auto_examine": true, 00:21:14.604 "iobuf_small_cache_size": 128, 00:21:14.604 "iobuf_large_cache_size": 16 00:21:14.604 } 00:21:14.604 }, 00:21:14.604 { 00:21:14.604 "method": "bdev_raid_set_options", 00:21:14.604 "params": { 00:21:14.604 "process_window_size_kb": 1024 00:21:14.604 } 00:21:14.604 }, 00:21:14.604 { 00:21:14.604 "method": "bdev_iscsi_set_options", 00:21:14.604 "params": { 00:21:14.604 "timeout_sec": 30 00:21:14.604 } 00:21:14.604 }, 00:21:14.604 { 00:21:14.604 "method": "bdev_nvme_set_options", 00:21:14.604 "params": { 00:21:14.604 "action_on_timeout": "none", 00:21:14.604 "timeout_us": 0, 00:21:14.604 "timeout_admin_us": 0, 00:21:14.604 "keep_alive_timeout_ms": 10000, 00:21:14.604 "arbitration_burst": 0, 00:21:14.604 "low_priority_weight": 0, 00:21:14.604 "medium_priority_weight": 0, 00:21:14.604 "high_priority_weight": 0, 00:21:14.604 "nvme_adminq_poll_period_us": 10000, 00:21:14.604 "nvme_ioq_poll_period_us": 0, 00:21:14.604 "io_queue_requests": 512, 00:21:14.604 "delay_cmd_submit": true, 00:21:14.604 "transport_retry_count": 4, 00:21:14.604 "bdev_retry_count": 3, 00:21:14.604 "transport_ack_timeout": 0, 00:21:14.604 "ctrlr_loss_timeout_sec": 0, 00:21:14.604 "reconnect_delay_sec": 0, 00:21:14.604 
"fast_io_fail_timeout_sec": 0, 00:21:14.604 "disable_auto_failback": false, 00:21:14.604 "generate_uuids": false, 00:21:14.604 "transport_tos": 0, 00:21:14.604 "nvme_error_stat": false, 00:21:14.604 "rdma_srq_size": 0, 00:21:14.604 "io_path_stat": false, 00:21:14.604 "allow_accel_sequence": false, 00:21:14.604 "rdma_max_cq_size": 0, 00:21:14.604 "rdma_cm_event_timeout_ms": 0, 00:21:14.604 "dhchap_digests": [ 00:21:14.604 "sha256", 00:21:14.604 "sha384", 00:21:14.604 "sha512" 00:21:14.604 ], 00:21:14.604 "dhchap_dhgroups": [ 00:21:14.605 "null", 00:21:14.605 "ffdhe2048", 00:21:14.605 "ffdhe3072", 00:21:14.605 "ffdhe4096", 00:21:14.605 "ffdhe6144", 00:21:14.605 "ffdhe8192" 00:21:14.605 ] 00:21:14.605 } 00:21:14.605 }, 00:21:14.605 { 00:21:14.605 "method": "bdev_nvme_attach_controller", 00:21:14.605 "params": { 00:21:14.605 "name": "nvme0", 00:21:14.605 "trtype": "TCP", 00:21:14.605 "adrfam": "IPv4", 00:21:14.605 "traddr": "10.0.0.2", 00:21:14.605 "trsvcid": "4420", 00:21:14.605 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.605 "prchk_reftag": false, 00:21:14.605 "prchk_guard": false, 00:21:14.605 "ctrlr_loss_timeout_sec": 0, 00:21:14.605 "reconnect_delay_sec": 0, 00:21:14.605 "fast_io_fail_timeout_sec": 0, 00:21:14.605 "psk": "key0", 00:21:14.605 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:14.605 "hdgst": false, 00:21:14.605 "ddgst": false 00:21:14.605 } 00:21:14.605 }, 00:21:14.605 { 00:21:14.605 "method": "bdev_nvme_set_hotplug", 00:21:14.605 "params": { 00:21:14.605 "period_us": 100000, 00:21:14.605 "enable": false 00:21:14.605 } 00:21:14.605 }, 00:21:14.605 { 00:21:14.605 "method": "bdev_enable_histogram", 00:21:14.605 "params": { 00:21:14.605 "name": "nvme0n1", 00:21:14.605 "enable": true 00:21:14.605 } 00:21:14.605 }, 00:21:14.605 { 00:21:14.605 "method": "bdev_wait_for_examine" 00:21:14.605 } 00:21:14.605 ] 00:21:14.605 }, 00:21:14.605 { 00:21:14.605 "subsystem": "nbd", 00:21:14.605 "config": [] 00:21:14.605 } 00:21:14.605 ] 00:21:14.605 }' 00:21:14.605 
12:10:03 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 2256584 00:21:14.605 12:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2256584 ']' 00:21:14.605 12:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2256584 00:21:14.605 12:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:14.605 12:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:14.605 12:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2256584 00:21:14.605 12:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:14.605 12:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:14.605 12:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2256584' 00:21:14.605 killing process with pid 2256584 00:21:14.605 12:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2256584 00:21:14.605 Received shutdown signal, test time was about 1.000000 seconds 00:21:14.605 00:21:14.605 Latency(us) 00:21:14.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.605 =================================================================================================================== 00:21:14.605 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:14.605 12:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2256584 00:21:14.863 12:10:04 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 2256306 00:21:14.863 12:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2256306 ']' 00:21:14.863 12:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2256306 00:21:14.863 12:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:14.863 12:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:14.863 12:10:04 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2256306 00:21:14.863 12:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:14.863 12:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:14.863 12:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2256306' 00:21:14.863 killing process with pid 2256306 00:21:14.863 12:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2256306 00:21:14.863 12:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2256306 00:21:15.122 12:10:04 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:21:15.122 12:10:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:15.122 12:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:15.122 12:10:04 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:21:15.122 "subsystems": [ 00:21:15.122 { 00:21:15.122 "subsystem": "keyring", 00:21:15.122 "config": [ 00:21:15.122 { 00:21:15.122 "method": "keyring_file_add_key", 00:21:15.122 "params": { 00:21:15.122 "name": "key0", 00:21:15.122 "path": "/tmp/tmp.G8UBtl6C83" 00:21:15.122 } 00:21:15.122 } 00:21:15.122 ] 00:21:15.122 }, 00:21:15.122 { 00:21:15.122 "subsystem": "iobuf", 00:21:15.122 "config": [ 00:21:15.122 { 00:21:15.122 "method": "iobuf_set_options", 00:21:15.122 "params": { 00:21:15.122 "small_pool_count": 8192, 00:21:15.122 "large_pool_count": 1024, 00:21:15.122 "small_bufsize": 8192, 00:21:15.122 "large_bufsize": 135168 00:21:15.122 } 00:21:15.122 } 00:21:15.122 ] 00:21:15.122 }, 00:21:15.122 { 00:21:15.122 "subsystem": "sock", 00:21:15.122 "config": [ 00:21:15.122 { 00:21:15.122 "method": "sock_set_default_impl", 00:21:15.122 "params": { 00:21:15.122 "impl_name": "posix" 00:21:15.122 } 00:21:15.122 }, 00:21:15.122 { 00:21:15.122 "method": "sock_impl_set_options", 00:21:15.122 "params": { 00:21:15.122 
"impl_name": "ssl", 00:21:15.122 "recv_buf_size": 4096, 00:21:15.122 "send_buf_size": 4096, 00:21:15.122 "enable_recv_pipe": true, 00:21:15.122 "enable_quickack": false, 00:21:15.122 "enable_placement_id": 0, 00:21:15.122 "enable_zerocopy_send_server": true, 00:21:15.122 "enable_zerocopy_send_client": false, 00:21:15.122 "zerocopy_threshold": 0, 00:21:15.122 "tls_version": 0, 00:21:15.122 "enable_ktls": false 00:21:15.122 } 00:21:15.122 }, 00:21:15.122 { 00:21:15.122 "method": "sock_impl_set_options", 00:21:15.122 "params": { 00:21:15.122 "impl_name": "posix", 00:21:15.122 "recv_buf_size": 2097152, 00:21:15.122 "send_buf_size": 2097152, 00:21:15.122 "enable_recv_pipe": true, 00:21:15.122 "enable_quickack": false, 00:21:15.122 "enable_placement_id": 0, 00:21:15.122 "enable_zerocopy_send_server": true, 00:21:15.122 "enable_zerocopy_send_client": false, 00:21:15.122 "zerocopy_threshold": 0, 00:21:15.122 "tls_version": 0, 00:21:15.122 "enable_ktls": false 00:21:15.122 } 00:21:15.122 } 00:21:15.122 ] 00:21:15.122 }, 00:21:15.122 { 00:21:15.122 "subsystem": "vmd", 00:21:15.122 "config": [] 00:21:15.122 }, 00:21:15.122 { 00:21:15.122 "subsystem": "accel", 00:21:15.122 "config": [ 00:21:15.122 { 00:21:15.122 "method": "accel_set_options", 00:21:15.122 "params": { 00:21:15.122 "small_cache_size": 128, 00:21:15.122 "large_cache_size": 16, 00:21:15.122 "task_count": 2048, 00:21:15.122 "sequence_count": 2048, 00:21:15.122 "buf_count": 2048 00:21:15.122 } 00:21:15.122 } 00:21:15.122 ] 00:21:15.122 }, 00:21:15.122 { 00:21:15.122 "subsystem": "bdev", 00:21:15.122 "config": [ 00:21:15.122 { 00:21:15.122 "method": "bdev_set_options", 00:21:15.122 "params": { 00:21:15.122 "bdev_io_pool_size": 65535, 00:21:15.122 "bdev_io_cache_size": 256, 00:21:15.122 "bdev_auto_examine": true, 00:21:15.122 "iobuf_small_cache_size": 128, 00:21:15.122 "iobuf_large_cache_size": 16 00:21:15.122 } 00:21:15.122 }, 00:21:15.122 { 00:21:15.122 "method": "bdev_raid_set_options", 00:21:15.122 "params": { 
00:21:15.122 "process_window_size_kb": 1024 00:21:15.122 } 00:21:15.122 }, 00:21:15.122 { 00:21:15.122 "method": "bdev_iscsi_set_options", 00:21:15.122 "params": { 00:21:15.122 "timeout_sec": 30 00:21:15.122 } 00:21:15.122 }, 00:21:15.122 { 00:21:15.122 "method": "bdev_nvme_set_options", 00:21:15.122 "params": { 00:21:15.122 "action_on_timeout": "none", 00:21:15.122 "timeout_us": 0, 00:21:15.122 "timeout_admin_us": 0, 00:21:15.122 "keep_alive_timeout_ms": 10000, 00:21:15.122 "arbitration_burst": 0, 00:21:15.122 "low_priority_weight": 0, 00:21:15.122 "medium_priority_weight": 0, 00:21:15.122 "high_priority_weight": 0, 00:21:15.122 "nvme_adminq_poll_period_us": 10000, 00:21:15.122 "nvme_ioq_poll_period_us": 0, 00:21:15.122 "io_queue_requests": 0, 00:21:15.122 "delay_cmd_submit": true, 00:21:15.122 "transport_retry_count": 4, 00:21:15.122 "bdev_retry_count": 3, 00:21:15.122 "transport_ack_timeout": 0, 00:21:15.122 "ctrlr_loss_timeout_sec": 0, 00:21:15.123 "reconnect_delay_sec": 0, 00:21:15.123 "fast_io_fail_timeout_sec": 0, 00:21:15.123 "disable_auto_failback": false, 00:21:15.123 "generate_uuids": false, 00:21:15.123 "transport_tos": 0, 00:21:15.123 "nvme_error_stat": false, 00:21:15.123 "rdma_srq_size": 0, 00:21:15.123 "io_path_stat": false, 00:21:15.123 "allow_accel_sequence": false, 00:21:15.123 "rdma_max_cq_size": 0, 00:21:15.123 "rdma_cm_event_timeout_ms": 0, 00:21:15.123 "dhchap_digests": [ 00:21:15.123 "sha256", 00:21:15.123 "sha384", 00:21:15.123 "sha512" 00:21:15.123 ], 00:21:15.123 "dhchap_dhgroups": [ 00:21:15.123 "null", 00:21:15.123 "ffdhe2048", 00:21:15.123 "ffdhe3072", 00:21:15.123 "ffdhe4096", 00:21:15.123 "ffdhe6144", 00:21:15.123 "ffdhe8192" 00:21:15.123 ] 00:21:15.123 } 00:21:15.123 }, 00:21:15.123 { 00:21:15.123 "method": "bdev_nvme_set_hotplug", 00:21:15.123 "params": { 00:21:15.123 "period_us": 100000, 00:21:15.123 "enable": false 00:21:15.123 } 00:21:15.123 }, 00:21:15.123 { 00:21:15.123 "method": "bdev_malloc_create", 00:21:15.123 "params": { 
00:21:15.123 "name": "malloc0", 00:21:15.123 "num_blocks": 8192, 00:21:15.123 "block_size": 4096, 00:21:15.123 "physical_block_size": 4096, 00:21:15.123 "uuid": "f32d473b-7416-4ee6-8322-4229449e838c", 00:21:15.123 "optimal_io_boundary": 0 00:21:15.123 } 00:21:15.123 }, 00:21:15.123 { 00:21:15.123 "method": "bdev_wait_for_examine" 00:21:15.123 } 00:21:15.123 ] 00:21:15.123 }, 00:21:15.123 { 00:21:15.123 "subsystem": "nbd", 00:21:15.123 "config": [] 00:21:15.123 }, 00:21:15.123 { 00:21:15.123 "subsystem": "scheduler", 00:21:15.123 "config": [ 00:21:15.123 { 00:21:15.123 "method": "framework_set_scheduler", 00:21:15.123 "params": { 00:21:15.123 "name": "static" 00:21:15.123 } 00:21:15.123 } 00:21:15.123 ] 00:21:15.123 }, 00:21:15.123 { 00:21:15.123 "subsystem": "nvmf", 00:21:15.123 "config": [ 00:21:15.123 { 00:21:15.123 "method": "nvmf_set_config", 00:21:15.123 "params": { 00:21:15.123 "discovery_filter": "match_any", 00:21:15.123 "admin_cmd_passthru": { 00:21:15.123 "identify_ctrlr": false 00:21:15.123 } 00:21:15.123 } 00:21:15.123 }, 00:21:15.123 { 00:21:15.123 "method": "nvmf_set_max_subsystems", 00:21:15.123 "params": { 00:21:15.123 "max_subsystems": 1024 00:21:15.123 } 00:21:15.123 }, 00:21:15.123 { 00:21:15.123 "method": "nvmf_set_crdt", 00:21:15.123 "params": { 00:21:15.123 "crdt1": 0, 00:21:15.123 "crdt2": 0, 00:21:15.123 "crdt3": 0 00:21:15.123 } 00:21:15.123 }, 00:21:15.123 { 00:21:15.123 "method": "nvmf_create_transport", 00:21:15.123 "params": { 00:21:15.123 "trtype": "TCP", 00:21:15.123 "max_queue_depth": 128, 00:21:15.123 "max_io_qpairs_per_ctrlr": 127, 00:21:15.123 "in_capsule_data_size": 4096, 00:21:15.123 "max_io_size": 131072, 00:21:15.123 "io_unit_size": 131072, 00:21:15.123 "max_aq_depth": 128, 00:21:15.123 "num_shared_buffers": 511, 00:21:15.123 "buf_cache_size": 4294967295, 00:21:15.123 "dif_insert_or_strip": false, 00:21:15.123 "zcopy": false, 00:21:15.123 "c2h_success": false, 00:21:15.123 "sock_priority": 0, 00:21:15.123 "abort_timeout_sec": 
1, 00:21:15.123 "ack_timeout": 0, 00:21:15.123 "data_wr_pool_size": 0 00:21:15.123 } 00:21:15.123 }, 00:21:15.123 { 00:21:15.123 "method": "nvmf_create_subsystem", 00:21:15.123 "params": { 00:21:15.123 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:15.123 "allow_any_host": false, 00:21:15.123 "serial_number": "00000000000000000000", 00:21:15.123 "model_number": "SPDK bdev Controller", 00:21:15.123 "max_namespaces": 32, 00:21:15.123 "min_cntlid": 1, 00:21:15.123 "max_cntlid": 65519, 00:21:15.123 "ana_reporting": false 00:21:15.123 } 00:21:15.123 }, 00:21:15.123 { 00:21:15.123 "method": "nvmf_subsystem_add_host", 00:21:15.123 "params": { 00:21:15.123 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:15.123 "host": "nqn.2016-06.io.spdk:host1", 00:21:15.123 "psk": "key0" 00:21:15.123 } 00:21:15.123 }, 00:21:15.123 { 00:21:15.123 "method": "nvmf_subsystem_add_ns", 00:21:15.123 "params": { 00:21:15.123 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:15.123 "namespace": { 00:21:15.123 "nsid": 1, 00:21:15.123 "bdev_name": "malloc0", 00:21:15.123 "nguid": "F32D473B74164EE683224229449E838C", 00:21:15.123 "uuid": "f32d473b-7416-4ee6-8322-4229449e838c", 00:21:15.123 "no_auto_visible": false 00:21:15.123 } 00:21:15.123 } 00:21:15.123 }, 00:21:15.123 { 00:21:15.123 "method": "nvmf_subsystem_add_listener", 00:21:15.123 "params": { 00:21:15.123 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:15.123 "listen_address": { 00:21:15.123 "trtype": "TCP", 00:21:15.123 "adrfam": "IPv4", 00:21:15.123 "traddr": "10.0.0.2", 00:21:15.123 "trsvcid": "4420" 00:21:15.123 }, 00:21:15.123 "secure_channel": true 00:21:15.123 } 00:21:15.123 } 00:21:15.123 ] 00:21:15.123 } 00:21:15.123 ] 00:21:15.123 }' 00:21:15.123 12:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.123 12:10:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2257137 00:21:15.123 12:10:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:15.123 12:10:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2257137 00:21:15.123 12:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2257137 ']' 00:21:15.123 12:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.123 12:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:15.123 12:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.123 12:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:15.123 12:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.123 [2024-06-10 12:10:04.491726] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:21:15.123 [2024-06-10 12:10:04.491782] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:15.123 EAL: No free 2048 kB hugepages reported on node 1 00:21:15.123 [2024-06-10 12:10:04.565044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.123 [2024-06-10 12:10:04.636746] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:15.123 [2024-06-10 12:10:04.636789] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:15.123 [2024-06-10 12:10:04.636803] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:15.123 [2024-06-10 12:10:04.636829] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:15.123 [2024-06-10 12:10:04.636837] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:15.123 [2024-06-10 12:10:04.636910] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:15.382 [2024-06-10 12:10:04.847038] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:15.382 [2024-06-10 12:10:04.879070] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:15.382 [2024-06-10 12:10:04.886850] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:15.950 12:10:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:15.950 12:10:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:15.950 12:10:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:15.950 12:10:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:15.950 12:10:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.950 12:10:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:15.950 12:10:05 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=2257170 00:21:15.950 12:10:05 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 2257170 /var/tmp/bdevperf.sock 00:21:15.950 12:10:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2257170 ']' 00:21:15.950 12:10:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:15.950 12:10:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:15.950 12:10:05 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:15.950 12:10:05 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:15.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:15.950 12:10:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:15.950 12:10:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.950 12:10:05 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:21:15.950 "subsystems": [ 00:21:15.950 { 00:21:15.950 "subsystem": "keyring", 00:21:15.950 "config": [ 00:21:15.950 { 00:21:15.950 "method": "keyring_file_add_key", 00:21:15.950 "params": { 00:21:15.950 "name": "key0", 00:21:15.950 "path": "/tmp/tmp.G8UBtl6C83" 00:21:15.950 } 00:21:15.950 } 00:21:15.950 ] 00:21:15.950 }, 00:21:15.950 { 00:21:15.950 "subsystem": "iobuf", 00:21:15.950 "config": [ 00:21:15.950 { 00:21:15.950 "method": "iobuf_set_options", 00:21:15.950 "params": { 00:21:15.950 "small_pool_count": 8192, 00:21:15.950 "large_pool_count": 1024, 00:21:15.950 "small_bufsize": 8192, 00:21:15.950 "large_bufsize": 135168 00:21:15.950 } 00:21:15.950 } 00:21:15.950 ] 00:21:15.950 }, 00:21:15.950 { 00:21:15.950 "subsystem": "sock", 00:21:15.950 "config": [ 00:21:15.950 { 00:21:15.950 "method": "sock_set_default_impl", 00:21:15.950 "params": { 00:21:15.950 "impl_name": "posix" 00:21:15.950 } 00:21:15.950 }, 00:21:15.950 { 00:21:15.950 "method": "sock_impl_set_options", 00:21:15.950 "params": { 00:21:15.950 "impl_name": "ssl", 00:21:15.950 "recv_buf_size": 4096, 00:21:15.950 "send_buf_size": 4096, 00:21:15.950 "enable_recv_pipe": true, 00:21:15.950 "enable_quickack": false, 00:21:15.950 "enable_placement_id": 0, 00:21:15.950 
"enable_zerocopy_send_server": true, 00:21:15.950 "enable_zerocopy_send_client": false, 00:21:15.950 "zerocopy_threshold": 0, 00:21:15.950 "tls_version": 0, 00:21:15.950 "enable_ktls": false 00:21:15.950 } 00:21:15.950 }, 00:21:15.950 { 00:21:15.950 "method": "sock_impl_set_options", 00:21:15.950 "params": { 00:21:15.950 "impl_name": "posix", 00:21:15.950 "recv_buf_size": 2097152, 00:21:15.950 "send_buf_size": 2097152, 00:21:15.950 "enable_recv_pipe": true, 00:21:15.950 "enable_quickack": false, 00:21:15.950 "enable_placement_id": 0, 00:21:15.950 "enable_zerocopy_send_server": true, 00:21:15.950 "enable_zerocopy_send_client": false, 00:21:15.950 "zerocopy_threshold": 0, 00:21:15.950 "tls_version": 0, 00:21:15.950 "enable_ktls": false 00:21:15.950 } 00:21:15.950 } 00:21:15.950 ] 00:21:15.950 }, 00:21:15.950 { 00:21:15.950 "subsystem": "vmd", 00:21:15.950 "config": [] 00:21:15.950 }, 00:21:15.950 { 00:21:15.950 "subsystem": "accel", 00:21:15.950 "config": [ 00:21:15.950 { 00:21:15.950 "method": "accel_set_options", 00:21:15.950 "params": { 00:21:15.950 "small_cache_size": 128, 00:21:15.950 "large_cache_size": 16, 00:21:15.950 "task_count": 2048, 00:21:15.950 "sequence_count": 2048, 00:21:15.950 "buf_count": 2048 00:21:15.950 } 00:21:15.950 } 00:21:15.950 ] 00:21:15.950 }, 00:21:15.950 { 00:21:15.950 "subsystem": "bdev", 00:21:15.950 "config": [ 00:21:15.950 { 00:21:15.950 "method": "bdev_set_options", 00:21:15.950 "params": { 00:21:15.950 "bdev_io_pool_size": 65535, 00:21:15.950 "bdev_io_cache_size": 256, 00:21:15.950 "bdev_auto_examine": true, 00:21:15.950 "iobuf_small_cache_size": 128, 00:21:15.950 "iobuf_large_cache_size": 16 00:21:15.950 } 00:21:15.950 }, 00:21:15.950 { 00:21:15.950 "method": "bdev_raid_set_options", 00:21:15.950 "params": { 00:21:15.950 "process_window_size_kb": 1024 00:21:15.950 } 00:21:15.950 }, 00:21:15.950 { 00:21:15.950 "method": "bdev_iscsi_set_options", 00:21:15.950 "params": { 00:21:15.950 "timeout_sec": 30 00:21:15.950 } 00:21:15.950 }, 
00:21:15.950 { 00:21:15.950 "method": "bdev_nvme_set_options", 00:21:15.950 "params": { 00:21:15.950 "action_on_timeout": "none", 00:21:15.950 "timeout_us": 0, 00:21:15.950 "timeout_admin_us": 0, 00:21:15.950 "keep_alive_timeout_ms": 10000, 00:21:15.950 "arbitration_burst": 0, 00:21:15.950 "low_priority_weight": 0, 00:21:15.950 "medium_priority_weight": 0, 00:21:15.950 "high_priority_weight": 0, 00:21:15.950 "nvme_adminq_poll_period_us": 10000, 00:21:15.950 "nvme_ioq_poll_period_us": 0, 00:21:15.950 "io_queue_requests": 512, 00:21:15.950 "delay_cmd_submit": true, 00:21:15.950 "transport_retry_count": 4, 00:21:15.950 "bdev_retry_count": 3, 00:21:15.950 "transport_ack_timeout": 0, 00:21:15.950 "ctrlr_loss_timeout_sec": 0, 00:21:15.950 "reconnect_delay_sec": 0, 00:21:15.950 "fast_io_fail_timeout_sec": 0, 00:21:15.950 "disable_auto_failback": false, 00:21:15.950 "generate_uuids": false, 00:21:15.950 "transport_tos": 0, 00:21:15.950 "nvme_error_stat": false, 00:21:15.950 "rdma_srq_size": 0, 00:21:15.950 "io_path_stat": false, 00:21:15.950 "allow_accel_sequence": false, 00:21:15.950 "rdma_max_cq_size": 0, 00:21:15.950 "rdma_cm_event_timeout_ms": 0, 00:21:15.950 "dhchap_digests": [ 00:21:15.950 "sha256", 00:21:15.950 "sha384", 00:21:15.950 "sha512" 00:21:15.950 ], 00:21:15.950 "dhchap_dhgroups": [ 00:21:15.950 "null", 00:21:15.950 "ffdhe2048", 00:21:15.950 "ffdhe3072", 00:21:15.950 "ffdhe4096", 00:21:15.950 "ffdhe6144", 00:21:15.950 "ffdhe8192" 00:21:15.950 ] 00:21:15.950 } 00:21:15.950 }, 00:21:15.950 { 00:21:15.950 "method": "bdev_nvme_attach_controller", 00:21:15.950 "params": { 00:21:15.950 "name": "nvme0", 00:21:15.950 "trtype": "TCP", 00:21:15.950 "adrfam": "IPv4", 00:21:15.950 "traddr": "10.0.0.2", 00:21:15.950 "trsvcid": "4420", 00:21:15.950 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:15.950 "prchk_reftag": false, 00:21:15.950 "prchk_guard": false, 00:21:15.950 "ctrlr_loss_timeout_sec": 0, 00:21:15.950 "reconnect_delay_sec": 0, 00:21:15.950 
"fast_io_fail_timeout_sec": 0, 00:21:15.950 "psk": "key0", 00:21:15.950 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:15.950 "hdgst": false, 00:21:15.950 "ddgst": false 00:21:15.950 } 00:21:15.950 }, 00:21:15.950 { 00:21:15.950 "method": "bdev_nvme_set_hotplug", 00:21:15.951 "params": { 00:21:15.951 "period_us": 100000, 00:21:15.951 "enable": false 00:21:15.951 } 00:21:15.951 }, 00:21:15.951 { 00:21:15.951 "method": "bdev_enable_histogram", 00:21:15.951 "params": { 00:21:15.951 "name": "nvme0n1", 00:21:15.951 "enable": true 00:21:15.951 } 00:21:15.951 }, 00:21:15.951 { 00:21:15.951 "method": "bdev_wait_for_examine" 00:21:15.951 } 00:21:15.951 ] 00:21:15.951 }, 00:21:15.951 { 00:21:15.951 "subsystem": "nbd", 00:21:15.951 "config": [] 00:21:15.951 } 00:21:15.951 ] 00:21:15.951 }' 00:21:15.951 [2024-06-10 12:10:05.376684] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:21:15.951 [2024-06-10 12:10:05.376739] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2257170 ] 00:21:15.951 EAL: No free 2048 kB hugepages reported on node 1 00:21:15.951 [2024-06-10 12:10:05.447685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.209 [2024-06-10 12:10:05.520996] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.210 [2024-06-10 12:10:05.670693] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:16.777 12:10:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:16.777 12:10:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:16.777 12:10:06 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:16.777 12:10:06 nvmf_tcp.nvmf_tls 
-- target/tls.sh@275 -- # jq -r '.[].name' 00:21:17.036 12:10:06 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.036 12:10:06 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:17.036 Running I/O for 1 seconds... 00:21:18.046 00:21:18.046 Latency(us) 00:21:18.046 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.046 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:18.046 Verification LBA range: start 0x0 length 0x2000 00:21:18.046 nvme0n1 : 1.01 5415.64 21.15 0.00 0.00 23462.25 6606.03 34183.58 00:21:18.046 =================================================================================================================== 00:21:18.046 Total : 5415.64 21.15 0.00 0.00 23462.25 6606.03 34183.58 00:21:18.046 0 00:21:18.046 12:10:07 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:21:18.046 12:10:07 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:21:18.046 12:10:07 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:18.046 12:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # type=--id 00:21:18.046 12:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # id=0 00:21:18.046 12:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:21:18.046 12:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:18.046 12:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:21:18.046 12:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:21:18.046 12:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # for n in $shm_files 00:21:18.046 12:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:18.046 nvmf_trace.0 00:21:18.046 12:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@822 -- # return 0 00:21:18.046 12:10:07 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 2257170 00:21:18.046 12:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2257170 ']' 00:21:18.046 12:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2257170 00:21:18.046 12:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:18.046 12:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:18.046 12:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2257170 00:21:18.305 12:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:18.305 12:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:18.305 12:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2257170' 00:21:18.305 killing process with pid 2257170 00:21:18.305 12:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2257170 00:21:18.305 Received shutdown signal, test time was about 1.000000 seconds 00:21:18.305 00:21:18.305 Latency(us) 00:21:18.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.305 =================================================================================================================== 00:21:18.305 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:18.305 12:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2257170 00:21:18.305 12:10:07 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:18.305 12:10:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:18.305 12:10:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:21:18.305 12:10:07 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:18.305 12:10:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:18.305 12:10:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:18.305 12:10:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:18.305 rmmod nvme_tcp 00:21:18.305 rmmod nvme_fabrics 00:21:18.305 rmmod nvme_keyring 00:21:18.305 12:10:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:18.564 12:10:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:18.564 12:10:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:18.564 12:10:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2257137 ']' 00:21:18.564 12:10:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2257137 00:21:18.564 12:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2257137 ']' 00:21:18.564 12:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2257137 00:21:18.564 12:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:18.564 12:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:18.564 12:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2257137 00:21:18.564 12:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:18.564 12:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:18.564 12:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2257137' 00:21:18.564 killing process with pid 2257137 00:21:18.564 12:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2257137 00:21:18.564 12:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2257137 00:21:18.564 12:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:18.564 12:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == 
\t\c\p ]] 00:21:18.564 12:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:18.564 12:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:18.564 12:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:18.565 12:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.565 12:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:18.565 12:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.102 12:10:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:21.102 12:10:10 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.MLTEnqm5E5 /tmp/tmp.pWYXcjeDBU /tmp/tmp.G8UBtl6C83 00:21:21.102 00:21:21.102 real 1m26.246s 00:21:21.102 user 2m6.414s 00:21:21.102 sys 0m34.796s 00:21:21.102 12:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:21.102 12:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.102 ************************************ 00:21:21.102 END TEST nvmf_tls 00:21:21.102 ************************************ 00:21:21.102 12:10:10 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:21.102 12:10:10 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:21.102 12:10:10 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:21.102 12:10:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:21.102 ************************************ 00:21:21.102 START TEST nvmf_fips 00:21:21.102 ************************************ 00:21:21.102 12:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:21.102 * Looking for test storage... 
00:21:21.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:21.102 12:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:21.102 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:21.102 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.102 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.102 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:21.102 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:21.102 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:21.102 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.102 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.102 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.102 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.102 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:21.102 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:21.102 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:21:21.102 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.102 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.102 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:21.102 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@649 -- # local es=0 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@637 -- # local arg=openssl 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # type -t openssl 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # type -P openssl 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # arg=/usr/bin/openssl 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@643 -- # [[ -x /usr/bin/openssl ]] 00:21:21.103 12:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # openssl md5 /dev/fd/62 00:21:21.104 Error setting digest 00:21:21.104 00C29FF0EE7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:21.104 00C29FF0EE7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:21.104 12:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # es=1 00:21:21.104 12:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:21.104 12:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:21.104 12:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:21.104 12:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:21.104 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:21.104 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:21.104 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:21.104 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:21.104 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:21.104 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.104 12:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:21.104 12:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.104 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:21.104 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:21.104 12:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # 
xtrace_disable 00:21:21.104 12:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:27.668 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:27.668 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:27.668 Found net devices under 0000:af:00.0: cvl_0_0 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:27.668 Found net devices under 0000:af:00.1: cvl_0_1 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:27.668 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:27.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:27.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:21:27.669 00:21:27.669 --- 10.0.0.2 ping statistics --- 00:21:27.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.669 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:27.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:27.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:21:27.669 00:21:27.669 --- 10.0.0.1 ping statistics --- 00:21:27.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.669 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2261418 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2261418 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 2261418 ']' 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:27.669 12:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:27.669 [2024-06-10 12:10:17.015534] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:21:27.669 [2024-06-10 12:10:17.015587] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:27.669 EAL: No free 2048 kB hugepages reported on node 1 00:21:27.669 [2024-06-10 12:10:17.088719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.669 [2024-06-10 12:10:17.160364] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:27.669 [2024-06-10 12:10:17.160404] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:27.669 [2024-06-10 12:10:17.160413] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:27.669 [2024-06-10 12:10:17.160422] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:27.669 [2024-06-10 12:10:17.160429] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:27.669 [2024-06-10 12:10:17.160454] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.603 12:10:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:28.603 12:10:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:21:28.603 12:10:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:28.603 12:10:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:28.603 12:10:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:28.603 12:10:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.603 12:10:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:28.603 12:10:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:28.603 12:10:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:28.603 12:10:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:28.603 12:10:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:28.603 12:10:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:28.603 12:10:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:28.603 12:10:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:28.603 [2024-06-10 12:10:17.975565] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.603 [2024-06-10 12:10:17.991562] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS 
support is considered experimental 00:21:28.603 [2024-06-10 12:10:17.991763] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:28.603 [2024-06-10 12:10:18.019890] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:28.603 malloc0 00:21:28.603 12:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:28.603 12:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2261480 00:21:28.603 12:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2261480 /var/tmp/bdevperf.sock 00:21:28.603 12:10:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 2261480 ']' 00:21:28.603 12:10:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:28.603 12:10:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:28.603 12:10:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:28.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:28.603 12:10:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:28.603 12:10:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:28.603 12:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:28.603 [2024-06-10 12:10:18.112942] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:21:28.603 [2024-06-10 12:10:18.112994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2261480 ] 00:21:28.860 EAL: No free 2048 kB hugepages reported on node 1 00:21:28.860 [2024-06-10 12:10:18.179112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.860 [2024-06-10 12:10:18.251106] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:21:29.427 12:10:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:29.427 12:10:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:21:29.427 12:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:29.685 [2024-06-10 12:10:19.053623] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:29.685 [2024-06-10 12:10:19.053704] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:29.685 TLSTESTn1 00:21:29.685 12:10:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:29.943 Running I/O for 10 seconds... 
00:21:39.905 00:21:39.905 Latency(us) 00:21:39.905 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.905 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:39.905 Verification LBA range: start 0x0 length 0x2000 00:21:39.905 TLSTESTn1 : 10.02 5582.14 21.81 0.00 0.00 22894.19 4666.16 51589.94 00:21:39.905 =================================================================================================================== 00:21:39.905 Total : 5582.14 21.81 0.00 0.00 22894.19 4666.16 51589.94 00:21:39.905 0 00:21:39.905 12:10:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:39.905 12:10:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:39.905 12:10:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # type=--id 00:21:39.905 12:10:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # id=0 00:21:39.905 12:10:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:21:39.905 12:10:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:39.905 12:10:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:21:39.905 12:10:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:21:39.905 12:10:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # for n in $shm_files 00:21:39.905 12:10:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:39.905 nvmf_trace.0 00:21:39.905 12:10:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@822 -- # return 0 00:21:39.905 12:10:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2261480 00:21:39.905 12:10:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 2261480 ']' 00:21:39.905 12:10:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill 
-0 2261480 00:21:39.905 12:10:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:21:39.905 12:10:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:39.905 12:10:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2261480 00:21:40.163 12:10:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:21:40.163 12:10:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:21:40.163 12:10:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2261480' 00:21:40.163 killing process with pid 2261480 00:21:40.163 12:10:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 2261480 00:21:40.163 Received shutdown signal, test time was about 10.000000 seconds 00:21:40.163 00:21:40.163 Latency(us) 00:21:40.163 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:40.163 =================================================================================================================== 00:21:40.163 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:40.163 [2024-06-10 12:10:29.426741] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:40.163 12:10:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 2261480 00:21:40.163 12:10:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:40.163 12:10:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:40.163 12:10:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:21:40.163 12:10:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:40.163 12:10:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:21:40.163 12:10:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:40.163 12:10:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:21:40.163 rmmod nvme_tcp 00:21:40.163 rmmod nvme_fabrics 00:21:40.163 rmmod nvme_keyring 00:21:40.163 12:10:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:40.163 12:10:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:21:40.163 12:10:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:21:40.163 12:10:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2261418 ']' 00:21:40.163 12:10:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2261418 00:21:40.163 12:10:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 2261418 ']' 00:21:40.163 12:10:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill -0 2261418 00:21:40.163 12:10:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:21:40.163 12:10:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:40.163 12:10:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2261418 00:21:40.421 12:10:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:40.421 12:10:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:40.421 12:10:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2261418' 00:21:40.421 killing process with pid 2261418 00:21:40.421 12:10:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 2261418 00:21:40.421 [2024-06-10 12:10:29.729564] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:40.421 12:10:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 2261418 00:21:40.421 12:10:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:40.421 12:10:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:40.421 12:10:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
00:21:40.421 12:10:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:40.421 12:10:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:40.421 12:10:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.421 12:10:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:40.421 12:10:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.952 12:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:42.952 12:10:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:42.952 00:21:42.952 real 0m21.781s 00:21:42.952 user 0m22.312s 00:21:42.952 sys 0m10.243s 00:21:42.952 12:10:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:42.952 12:10:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:42.952 ************************************ 00:21:42.952 END TEST nvmf_fips 00:21:42.952 ************************************ 00:21:42.952 12:10:32 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:21:42.952 12:10:32 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:21:42.952 12:10:32 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:21:42.952 12:10:32 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:21:42.952 12:10:32 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:21:42.952 12:10:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:49.512 12:10:38 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:49.512 12:10:38 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:21:49.512 12:10:38 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:49.512 12:10:38 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:49.512 12:10:38 nvmf_tcp -- nvmf/common.sh@292 -- # local 
-a pci_net_devs 00:21:49.512 12:10:38 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:49.512 12:10:38 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == 
rdma ]] 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:49.513 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:49.513 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:49.513 Found net devices under 0000:af:00.0: cvl_0_0 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:49.513 Found net devices under 0000:af:00.1: cvl_0_1 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:21:49.513 12:10:38 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 
00:21:49.513 12:10:38 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:49.513 12:10:38 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:49.513 12:10:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:49.513 ************************************ 00:21:49.513 START TEST nvmf_perf_adq 00:21:49.513 ************************************ 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:49.513 * Looking for test storage... 00:21:49.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:49.513 12:10:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- 
# x722=() 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 
00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:56.074 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:56.074 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:56.074 Found net devices under 0000:af:00.0: cvl_0_0 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:56.074 Found net devices under 0000:af:00.1: cvl_0_1 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:21:56.074 12:10:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:56.639 12:10:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:58.614 12:10:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:03.883 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:03.884 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 
(0x8086 - 0x159b)' 00:22:03.884 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:03.884 Found net devices under 0000:af:00.0: cvl_0_0 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:03.884 Found net devices under 0000:af:00.1: cvl_0_1 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:03.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:03.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:22:03.884 00:22:03.884 --- 10.0.0.2 ping statistics --- 00:22:03.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:03.884 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:03.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:03.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:22:03.884 00:22:03.884 --- 10.0.0.1 ping statistics --- 00:22:03.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:03.884 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2271765 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2271765 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 
-- # '[' -z 2271765 ']' 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:03.884 12:10:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:04.142 [2024-06-10 12:10:53.417167] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:22:04.142 [2024-06-10 12:10:53.417221] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:04.142 EAL: No free 2048 kB hugepages reported on node 1 00:22:04.142 [2024-06-10 12:10:53.492048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:04.142 [2024-06-10 12:10:53.567708] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:04.142 [2024-06-10 12:10:53.567749] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:04.142 [2024-06-10 12:10:53.567759] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:04.142 [2024-06-10 12:10:53.567769] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:04.142 [2024-06-10 12:10:53.567776] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:04.142 [2024-06-10 12:10:53.567823] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:04.142 [2024-06-10 12:10:53.567918] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:04.142 [2024-06-10 12:10:53.568002] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:22:04.142 [2024-06-10 12:10:53.568004] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.706 12:10:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:04.706 12:10:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@863 -- # return 0 00:22:04.707 12:10:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:04.707 12:10:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:04.707 12:10:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:04.964 12:10:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:04.964 12:10:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:04.964 12:10:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:04.964 12:10:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:04.964 12:10:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:04.964 12:10:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:04.965 12:10:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:04.965 12:10:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:04.965 12:10:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:04.965 12:10:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 
00:22:04.965 12:10:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:04.965 12:10:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:04.965 12:10:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:04.965 12:10:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:04.965 12:10:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:04.965 12:10:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:04.965 12:10:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:04.965 12:10:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:04.965 12:10:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:04.965 [2024-06-10 12:10:54.414148] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:04.965 12:10:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:04.965 12:10:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:04.965 12:10:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:04.965 12:10:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:04.965 Malloc1 00:22:04.965 12:10:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:04.965 12:10:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:04.965 12:10:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:04.965 12:10:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:04.965 12:10:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:04.965 
12:10:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:04.965 12:10:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:04.965 12:10:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:04.965 12:10:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:04.965 12:10:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:04.965 12:10:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:04.965 12:10:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:04.965 [2024-06-10 12:10:54.464982] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:04.965 12:10:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:04.965 12:10:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2271953 00:22:04.965 12:10:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:22:04.965 12:10:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:05.222 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.116 12:10:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:22:07.116 12:10:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:07.116 12:10:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:07.116 12:10:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:07.116 12:10:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:22:07.116 
"tick_rate": 2500000000, 00:22:07.116 "poll_groups": [ 00:22:07.116 { 00:22:07.116 "name": "nvmf_tgt_poll_group_000", 00:22:07.116 "admin_qpairs": 1, 00:22:07.116 "io_qpairs": 1, 00:22:07.116 "current_admin_qpairs": 1, 00:22:07.116 "current_io_qpairs": 1, 00:22:07.116 "pending_bdev_io": 0, 00:22:07.116 "completed_nvme_io": 20644, 00:22:07.116 "transports": [ 00:22:07.116 { 00:22:07.116 "trtype": "TCP" 00:22:07.116 } 00:22:07.116 ] 00:22:07.116 }, 00:22:07.116 { 00:22:07.116 "name": "nvmf_tgt_poll_group_001", 00:22:07.116 "admin_qpairs": 0, 00:22:07.116 "io_qpairs": 1, 00:22:07.116 "current_admin_qpairs": 0, 00:22:07.116 "current_io_qpairs": 1, 00:22:07.116 "pending_bdev_io": 0, 00:22:07.116 "completed_nvme_io": 20277, 00:22:07.116 "transports": [ 00:22:07.116 { 00:22:07.116 "trtype": "TCP" 00:22:07.116 } 00:22:07.116 ] 00:22:07.116 }, 00:22:07.116 { 00:22:07.116 "name": "nvmf_tgt_poll_group_002", 00:22:07.116 "admin_qpairs": 0, 00:22:07.116 "io_qpairs": 1, 00:22:07.116 "current_admin_qpairs": 0, 00:22:07.116 "current_io_qpairs": 1, 00:22:07.116 "pending_bdev_io": 0, 00:22:07.116 "completed_nvme_io": 20853, 00:22:07.116 "transports": [ 00:22:07.116 { 00:22:07.116 "trtype": "TCP" 00:22:07.116 } 00:22:07.116 ] 00:22:07.116 }, 00:22:07.116 { 00:22:07.116 "name": "nvmf_tgt_poll_group_003", 00:22:07.116 "admin_qpairs": 0, 00:22:07.116 "io_qpairs": 1, 00:22:07.116 "current_admin_qpairs": 0, 00:22:07.116 "current_io_qpairs": 1, 00:22:07.116 "pending_bdev_io": 0, 00:22:07.116 "completed_nvme_io": 20686, 00:22:07.116 "transports": [ 00:22:07.116 { 00:22:07.116 "trtype": "TCP" 00:22:07.116 } 00:22:07.116 ] 00:22:07.116 } 00:22:07.116 ] 00:22:07.116 }' 00:22:07.116 12:10:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:07.116 12:10:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:22:07.116 12:10:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:22:07.116 12:10:56 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:22:07.116 12:10:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2271953 00:22:15.297 Initializing NVMe Controllers 00:22:15.297 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:15.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:15.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:15.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:15.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:15.297 Initialization complete. Launching workers. 00:22:15.297 ======================================================== 00:22:15.297 Latency(us) 00:22:15.297 Device Information : IOPS MiB/s Average min max 00:22:15.297 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10849.30 42.38 5899.36 2443.12 10064.34 00:22:15.297 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10755.90 42.02 5951.46 1974.62 10727.39 00:22:15.297 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11031.10 43.09 5803.54 1841.68 10295.99 00:22:15.297 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10931.70 42.70 5856.12 2526.11 10602.54 00:22:15.297 ======================================================== 00:22:15.297 Total : 43567.99 170.19 5877.11 1841.68 10727.39 00:22:15.297 00:22:15.297 12:11:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:22:15.297 12:11:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:15.297 12:11:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:15.297 12:11:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:15.297 12:11:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:15.297 12:11:04 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:15.297 12:11:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:15.297 rmmod nvme_tcp 00:22:15.297 rmmod nvme_fabrics 00:22:15.297 rmmod nvme_keyring 00:22:15.297 12:11:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:15.297 12:11:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:15.297 12:11:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:15.297 12:11:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2271765 ']' 00:22:15.297 12:11:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2271765 00:22:15.297 12:11:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@949 -- # '[' -z 2271765 ']' 00:22:15.297 12:11:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # kill -0 2271765 00:22:15.297 12:11:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # uname 00:22:15.297 12:11:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:15.297 12:11:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2271765 00:22:15.297 12:11:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:15.297 12:11:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:15.297 12:11:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2271765' 00:22:15.297 killing process with pid 2271765 00:22:15.297 12:11:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@968 -- # kill 2271765 00:22:15.297 12:11:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@973 -- # wait 2271765 00:22:15.557 12:11:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:15.557 12:11:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:15.557 12:11:04 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:15.557 12:11:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:15.557 12:11:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:15.557 12:11:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.557 12:11:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:15.557 12:11:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.090 12:11:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:18.090 12:11:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:22:18.090 12:11:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:19.027 12:11:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:20.930 12:11:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:26.210 12:11:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:22:26.210 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:26.210 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.210 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:26.210 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:26.210 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:26.210 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.210 12:11:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:26.210 12:11:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.210 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ 
phy != virt ]] 00:22:26.210 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:26.210 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:26.210 12:11:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:26.210 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:26.210 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:26.210 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:26.210 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:26.210 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:26.210 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:26.210 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:26.210 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:26.210 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:26.210 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:26.210 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:26.210 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:26.210 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:26.210 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:26.210 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:26.210 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:26.210 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:26.210 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:26.210 12:11:15 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:26.211 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:26.211 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:26.211 Found net devices under 0000:af:00.0: cvl_0_0 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:26.211 Found net devices under 0000:af:00.1: cvl_0_1 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:26.211 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:26.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:22:26.211 00:22:26.211 --- 10.0.0.2 ping statistics --- 00:22:26.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.211 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:26.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:26.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:22:26.211 00:22:26.211 --- 10.0.0.1 ping statistics --- 00:22:26.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.211 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:26.211 12:11:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:26.477 12:11:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:22:26.477 12:11:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:26.477 12:11:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 
00:22:26.477 12:11:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:26.477 net.core.busy_poll = 1 00:22:26.477 12:11:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:26.477 net.core.busy_read = 1 00:22:26.477 12:11:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:26.477 12:11:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:26.477 12:11:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:26.477 12:11:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:26.477 12:11:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:26.737 12:11:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:26.737 12:11:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:26.737 12:11:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:26.737 12:11:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:26.737 12:11:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2276033 00:22:26.737 12:11:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2276033 00:22:26.737 12:11:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # '[' -z 2276033 ']' 00:22:26.737 12:11:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.737 12:11:16 
nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:26.737 12:11:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.737 12:11:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:26.737 12:11:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:26.737 12:11:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:26.737 [2024-06-10 12:11:16.070792] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:22:26.737 [2024-06-10 12:11:16.070842] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.737 EAL: No free 2048 kB hugepages reported on node 1 00:22:26.737 [2024-06-10 12:11:16.143940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:26.737 [2024-06-10 12:11:16.220705] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:26.737 [2024-06-10 12:11:16.220744] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:26.737 [2024-06-10 12:11:16.220754] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:26.737 [2024-06-10 12:11:16.220762] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:26.737 [2024-06-10 12:11:16.220769] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:26.737 [2024-06-10 12:11:16.220809] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:26.737 [2024-06-10 12:11:16.220828] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:26.737 [2024-06-10 12:11:16.220919] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:22:26.737 [2024-06-10 12:11:16.220920] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.671 12:11:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:27.671 12:11:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@863 -- # return 0 00:22:27.671 12:11:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:27.671 12:11:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:27.671 12:11:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.671 12:11:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:27.671 12:11:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:22:27.671 12:11:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:27.671 12:11:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:27.671 12:11:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:27.672 12:11:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.672 12:11:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:27.672 12:11:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:27.672 12:11:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:27.672 12:11:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 
00:22:27.672 12:11:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.672 12:11:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:27.672 12:11:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:27.672 12:11:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:27.672 12:11:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.672 12:11:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:27.672 12:11:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:27.672 12:11:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:27.672 12:11:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.672 [2024-06-10 12:11:17.076249] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:27.672 12:11:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:27.672 12:11:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:27.672 12:11:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:27.672 12:11:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.672 Malloc1 00:22:27.672 12:11:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:27.672 12:11:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:27.672 12:11:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:27.672 12:11:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.672 12:11:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:27.672 
12:11:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:27.672 12:11:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:27.672 12:11:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.672 12:11:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:27.672 12:11:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:27.672 12:11:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:27.672 12:11:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.672 [2024-06-10 12:11:17.126966] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:27.672 12:11:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:27.672 12:11:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2276168 00:22:27.672 12:11:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:22:27.672 12:11:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:27.672 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.202 12:11:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:22:30.202 12:11:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:30.202 12:11:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:30.202 12:11:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:30.202 12:11:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:22:30.202 
"tick_rate": 2500000000, 00:22:30.202 "poll_groups": [ 00:22:30.202 { 00:22:30.202 "name": "nvmf_tgt_poll_group_000", 00:22:30.202 "admin_qpairs": 1, 00:22:30.202 "io_qpairs": 1, 00:22:30.202 "current_admin_qpairs": 1, 00:22:30.202 "current_io_qpairs": 1, 00:22:30.202 "pending_bdev_io": 0, 00:22:30.202 "completed_nvme_io": 29193, 00:22:30.202 "transports": [ 00:22:30.202 { 00:22:30.202 "trtype": "TCP" 00:22:30.202 } 00:22:30.202 ] 00:22:30.202 }, 00:22:30.202 { 00:22:30.202 "name": "nvmf_tgt_poll_group_001", 00:22:30.202 "admin_qpairs": 0, 00:22:30.202 "io_qpairs": 3, 00:22:30.202 "current_admin_qpairs": 0, 00:22:30.202 "current_io_qpairs": 3, 00:22:30.202 "pending_bdev_io": 0, 00:22:30.202 "completed_nvme_io": 30743, 00:22:30.202 "transports": [ 00:22:30.202 { 00:22:30.202 "trtype": "TCP" 00:22:30.202 } 00:22:30.202 ] 00:22:30.202 }, 00:22:30.202 { 00:22:30.202 "name": "nvmf_tgt_poll_group_002", 00:22:30.202 "admin_qpairs": 0, 00:22:30.202 "io_qpairs": 0, 00:22:30.202 "current_admin_qpairs": 0, 00:22:30.202 "current_io_qpairs": 0, 00:22:30.202 "pending_bdev_io": 0, 00:22:30.202 "completed_nvme_io": 0, 00:22:30.202 "transports": [ 00:22:30.202 { 00:22:30.202 "trtype": "TCP" 00:22:30.202 } 00:22:30.202 ] 00:22:30.202 }, 00:22:30.202 { 00:22:30.202 "name": "nvmf_tgt_poll_group_003", 00:22:30.202 "admin_qpairs": 0, 00:22:30.202 "io_qpairs": 0, 00:22:30.202 "current_admin_qpairs": 0, 00:22:30.202 "current_io_qpairs": 0, 00:22:30.202 "pending_bdev_io": 0, 00:22:30.202 "completed_nvme_io": 0, 00:22:30.202 "transports": [ 00:22:30.202 { 00:22:30.202 "trtype": "TCP" 00:22:30.202 } 00:22:30.202 ] 00:22:30.202 } 00:22:30.202 ] 00:22:30.202 }' 00:22:30.202 12:11:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:30.202 12:11:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:22:30.202 12:11:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:22:30.202 12:11:19 
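The check at perf_adq.sh@100 counts idle poll groups by selecting stats entries with `current_io_qpairs == 0` and counting the resulting output lines (`jq` emits one value per selected object, so `wc -l` gives the group count). A minimal stand-alone reproduction of that pipeline, with the stats object trimmed down to the fields the filter uses and assuming `jq` is installed:

```shell
# Reduced version of the nvmf_get_stats output above: two busy and two
# idle poll groups, mirroring this run's 0xF core mask.
stats='{"poll_groups":[
  {"name":"nvmf_tgt_poll_group_000","current_io_qpairs":1},
  {"name":"nvmf_tgt_poll_group_001","current_io_qpairs":3},
  {"name":"nvmf_tgt_poll_group_002","current_io_qpairs":0},
  {"name":"nvmf_tgt_poll_group_003","current_io_qpairs":0}]}'

# Same filter as perf_adq.sh@100: one output line per idle group, then count.
count=$(echo "$stats" | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l)
echo "idle poll groups: $count"
```

The test then asserts `count` is not less than the expected number of idle groups, confirming the perf load stayed on the ADQ-configured poll groups.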
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:22:30.202 12:11:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2276168 00:22:38.306 Initializing NVMe Controllers 00:22:38.306 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:38.306 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:38.306 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:38.306 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:38.306 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:38.306 Initialization complete. Launching workers. 00:22:38.306 ======================================================== 00:22:38.306 Latency(us) 00:22:38.306 Device Information : IOPS MiB/s Average min max 00:22:38.306 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5684.08 22.20 11263.50 1357.32 57622.58 00:22:38.306 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5547.78 21.67 11536.81 1379.14 58875.99 00:22:38.306 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 15624.14 61.03 4095.92 1282.05 7227.64 00:22:38.306 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4997.68 19.52 12805.33 1614.83 59355.14 00:22:38.306 ======================================================== 00:22:38.306 Total : 31853.69 124.43 8037.33 1282.05 59355.14 00:22:38.306 00:22:38.306 12:11:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:22:38.306 12:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:38.306 12:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:38.306 12:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:38.306 12:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:38.306 
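As a consistency check on the latency summary above, the reported overall average is the IOPS-weighted mean of the per-core averages, not a plain mean. A small `awk` sketch over the four data rows reproduces the reported total of about 8037.33 us:

```shell
# Per-core IOPS and average latency (us) copied from the summary table.
# The IOPS-weighted mean should land near the reported total of 8037.33 us.
avg=$(awk 'BEGIN {
  iops[0]=5684.08;  lat[0]=11263.50;   # core 4
  iops[1]=5547.78;  lat[1]=11536.81;   # core 5
  iops[2]=15624.14; lat[2]=4095.92;    # core 6
  iops[3]=4997.68;  lat[3]=12805.33;   # core 7
  for (i = 0; i < 4; i++) { sum += iops[i] * lat[i]; total += iops[i] }
  printf "%.2f", sum / total
}')
echo "IOPS-weighted average latency: $avg us"
```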
12:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:38.306 12:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:38.306 rmmod nvme_tcp 00:22:38.306 rmmod nvme_fabrics 00:22:38.306 rmmod nvme_keyring 00:22:38.306 12:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:38.306 12:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:38.306 12:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:38.306 12:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2276033 ']' 00:22:38.306 12:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2276033 00:22:38.306 12:11:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@949 -- # '[' -z 2276033 ']' 00:22:38.306 12:11:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # kill -0 2276033 00:22:38.306 12:11:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # uname 00:22:38.306 12:11:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:38.306 12:11:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2276033 00:22:38.306 12:11:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:38.306 12:11:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:38.306 12:11:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2276033' 00:22:38.306 killing process with pid 2276033 00:22:38.306 12:11:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@968 -- # kill 2276033 00:22:38.306 12:11:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@973 -- # wait 2276033 00:22:38.306 12:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:38.306 12:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:38.306 12:11:27 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:38.306 12:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:38.306 12:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:38.306 12:11:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.306 12:11:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:38.306 12:11:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.276 12:11:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:40.276 12:11:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:22:40.276 00:22:40.276 real 0m51.567s 00:22:40.276 user 2m46.817s 00:22:40.276 sys 0m13.293s 00:22:40.276 12:11:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:40.276 12:11:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.276 ************************************ 00:22:40.276 END TEST nvmf_perf_adq 00:22:40.276 ************************************ 00:22:40.276 12:11:29 nvmf_tcp -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:40.276 12:11:29 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:22:40.276 12:11:29 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:40.276 12:11:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:40.276 ************************************ 00:22:40.276 START TEST nvmf_shutdown 00:22:40.276 ************************************ 00:22:40.276 12:11:29 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:40.536 * Looking for test storage... 
00:22:40.536 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:40.536 12:11:29 
nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:40.536 12:11:29 
nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:40.536 ************************************ 00:22:40.536 START TEST nvmf_shutdown_tc1 00:22:40.536 ************************************ 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc1 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:40.536 12:11:29 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:40.536 12:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@298 -- # mlx=() 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:47.104 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:47.104 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:47.104 Found net devices under 0000:af:00.0: cvl_0_0 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:47.104 12:11:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:47.104 Found net devices under 0000:af:00.1: cvl_0_1 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:47.104 
12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:47.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:47.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:22:47.104 00:22:47.104 --- 10.0.0.2 ping statistics --- 00:22:47.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.104 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:47.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:47.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:22:47.104 00:22:47.104 --- 10.0.0.1 ping statistics --- 00:22:47.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.104 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:47.104 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:47.105 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:47.105 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:47.105 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:47.105 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:47.105 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:47.105 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:47.105 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
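The `nvmf_tcp_init` sequence traced above (flush both ports, create a namespace, move the target-side port into it, assign 10.0.0.1/10.0.0.2, bring links up, open port 4420) can be condensed into the following sketch. Interface names `cvl_0_0`/`cvl_0_1` and all addresses are taken from the log; the real commands need root and the physical e810 NICs, so `DRY_RUN=1` only prints them.

```shell
#!/usr/bin/env sh
# Dry-run sketch of the nvmf/common.sh nvmf_tcp_init steps seen in the trace.
DRY_RUN=${DRY_RUN:-1}

run() {
    # Echo instead of executing when DRY_RUN=1 (the real steps require root).
    if [ "$DRY_RUN" = 1 ]; then
        echo "$*"
    else
        "$@"
    fi
}

nvmf_tcp_init_sketch() {
    ns=cvl_0_0_ns_spdk
    run ip -4 addr flush cvl_0_0
    run ip -4 addr flush cvl_0_1
    run ip netns add "$ns"
    # The target-side port moves into the namespace; the initiator side
    # (cvl_0_1) stays in the root namespace.
    run ip link set cvl_0_0 netns "$ns"
    run ip addr add 10.0.0.1/24 dev cvl_0_1
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    run ip link set cvl_0_1 up
    run ip netns exec "$ns" ip link set cvl_0_0 up
    run ip netns exec "$ns" ip link set lo up
    # Accept NVMe/TCP traffic (port 4420) arriving on the initiator interface.
    run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
}

nvmf_tcp_init_sketch
```

The two `ping -c 1` calls in the log then verify both directions of this split before `NVMF_APP` is prefixed with `ip netns exec cvl_0_0_ns_spdk` and the target is started inside the namespace.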
00:22:47.105 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:47.105 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:47.105 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2281574 00:22:47.105 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2281574 00:22:47.105 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 2281574 ']' 00:22:47.105 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.105 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:47.105 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.105 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:47.105 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:47.105 12:11:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:47.105 [2024-06-10 12:11:36.416561] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:22:47.105 [2024-06-10 12:11:36.416613] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:47.105 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.105 [2024-06-10 12:11:36.490467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:47.105 [2024-06-10 12:11:36.562154] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:47.105 [2024-06-10 12:11:36.562195] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:47.105 [2024-06-10 12:11:36.562204] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:47.105 [2024-06-10 12:11:36.562212] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:47.105 [2024-06-10 12:11:36.562219] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:47.105 [2024-06-10 12:11:36.562332] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.105 [2024-06-10 12:11:36.562436] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:22:47.105 [2024-06-10 12:11:36.562547] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:47.105 [2024-06-10 12:11:36.562548] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:22:48.041 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:48.041 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:22:48.041 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:48.041 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:48.041 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:48.041 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.041 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:48.041 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:48.041 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:48.041 [2024-06-10 12:11:37.260368] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.041 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:48.042 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:48.042 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:48.042 
12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:48.042 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:48.042 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:48.042 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:48.042 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:48.042 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:48.042 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:48.042 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:48.042 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:48.042 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:48.042 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:48.042 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:48.042 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:48.042 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:48.042 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:48.042 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:48.042 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:48.042 12:11:37 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:48.042 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:48.042 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:48.042 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:48.042 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:48.042 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:48.042 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:48.042 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:48.042 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:48.042 Malloc1 00:22:48.042 [2024-06-10 12:11:37.375272] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:48.042 Malloc2 00:22:48.042 Malloc3 00:22:48.042 Malloc4 00:22:48.042 Malloc5 00:22:48.301 Malloc6 00:22:48.301 Malloc7 00:22:48.301 Malloc8 00:22:48.301 Malloc9 00:22:48.301 Malloc10 00:22:48.301 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:48.301 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:48.301 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:48.301 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:48.301 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2281810 00:22:48.301 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2281810 
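The `create_subsystems` phase above loops over `num_subsystems=({1..10})` and `cat`s one block of RPC commands per id into `rpcs.txt`, which a single `rpc_cmd` call then replays (producing the Malloc1..Malloc10 bdevs and the listener on 10.0.0.2:4420). The xtrace only shows the loop and the `cat`s, not the block contents, so the four RPC lines below are a hypothetical stand-in for what each block could contain:

```shell
#!/usr/bin/env sh
# Sketch of the shutdown.sh create_subsystems loop; the RPC lines per id are
# illustrative guesses, not copied from the (elided) heredoc in the script.
rpcs=$(mktemp)
for i in $(seq 1 10); do
  {
    echo "bdev_malloc_create -b Malloc$i 64 512"
    echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a"
    echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
    echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
  } >> "$rpcs"
done
```

Batching all ten subsystems into one `rpcs.txt` replay keeps the setup to a single RPC client invocation instead of forty.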
/var/tmp/bdevperf.sock 00:22:48.301 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 2281810 ']' 00:22:48.301 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:48.301 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:48.301 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:48.301 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:48.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:48.301 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:48.301 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:48.301 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:48.301 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:48.301 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:48.301 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.301 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.301 { 00:22:48.301 "params": { 00:22:48.301 "name": "Nvme$subsystem", 00:22:48.301 "trtype": "$TEST_TRANSPORT", 00:22:48.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.301 "adrfam": "ipv4", 00:22:48.301 "trsvcid": "$NVMF_PORT", 00:22:48.301 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.301 "hdgst": ${hdgst:-false}, 00:22:48.301 "ddgst": ${ddgst:-false} 00:22:48.301 }, 00:22:48.301 "method": "bdev_nvme_attach_controller" 00:22:48.301 } 00:22:48.301 EOF 00:22:48.301 )") 00:22:48.301 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:48.561 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.561 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.561 { 00:22:48.561 "params": { 00:22:48.561 "name": "Nvme$subsystem", 00:22:48.561 "trtype": "$TEST_TRANSPORT", 00:22:48.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.561 "adrfam": "ipv4", 00:22:48.561 "trsvcid": "$NVMF_PORT", 00:22:48.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.561 "hdgst": ${hdgst:-false}, 00:22:48.561 "ddgst": ${ddgst:-false} 00:22:48.561 }, 00:22:48.561 "method": "bdev_nvme_attach_controller" 00:22:48.561 } 00:22:48.561 EOF 00:22:48.561 )") 00:22:48.561 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:48.561 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.561 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.561 { 00:22:48.561 "params": { 00:22:48.561 "name": "Nvme$subsystem", 00:22:48.561 "trtype": "$TEST_TRANSPORT", 00:22:48.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.561 "adrfam": "ipv4", 00:22:48.561 "trsvcid": "$NVMF_PORT", 00:22:48.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.561 "hdgst": ${hdgst:-false}, 00:22:48.561 "ddgst": ${ddgst:-false} 00:22:48.561 }, 00:22:48.561 "method": 
"bdev_nvme_attach_controller" 00:22:48.561 } 00:22:48.561 EOF 00:22:48.561 )") 00:22:48.561 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:48.561 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.561 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.561 { 00:22:48.561 "params": { 00:22:48.561 "name": "Nvme$subsystem", 00:22:48.561 "trtype": "$TEST_TRANSPORT", 00:22:48.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.561 "adrfam": "ipv4", 00:22:48.561 "trsvcid": "$NVMF_PORT", 00:22:48.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.561 "hdgst": ${hdgst:-false}, 00:22:48.561 "ddgst": ${ddgst:-false} 00:22:48.561 }, 00:22:48.561 "method": "bdev_nvme_attach_controller" 00:22:48.561 } 00:22:48.561 EOF 00:22:48.561 )") 00:22:48.561 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:48.561 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.561 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.561 { 00:22:48.561 "params": { 00:22:48.561 "name": "Nvme$subsystem", 00:22:48.561 "trtype": "$TEST_TRANSPORT", 00:22:48.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.561 "adrfam": "ipv4", 00:22:48.561 "trsvcid": "$NVMF_PORT", 00:22:48.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.561 "hdgst": ${hdgst:-false}, 00:22:48.561 "ddgst": ${ddgst:-false} 00:22:48.561 }, 00:22:48.561 "method": "bdev_nvme_attach_controller" 00:22:48.561 } 00:22:48.561 EOF 00:22:48.561 )") 00:22:48.561 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:48.561 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.561 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.561 { 00:22:48.561 "params": { 00:22:48.561 "name": "Nvme$subsystem", 00:22:48.561 "trtype": "$TEST_TRANSPORT", 00:22:48.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.561 "adrfam": "ipv4", 00:22:48.561 "trsvcid": "$NVMF_PORT", 00:22:48.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.561 "hdgst": ${hdgst:-false}, 00:22:48.561 "ddgst": ${ddgst:-false} 00:22:48.561 }, 00:22:48.561 "method": "bdev_nvme_attach_controller" 00:22:48.561 } 00:22:48.561 EOF 00:22:48.561 )") 00:22:48.561 [2024-06-10 12:11:37.856669] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:22:48.561 [2024-06-10 12:11:37.856721] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:48.561 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:48.561 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.561 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.561 { 00:22:48.561 "params": { 00:22:48.561 "name": "Nvme$subsystem", 00:22:48.561 "trtype": "$TEST_TRANSPORT", 00:22:48.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.561 "adrfam": "ipv4", 00:22:48.561 "trsvcid": "$NVMF_PORT", 00:22:48.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.561 "hdgst": ${hdgst:-false}, 00:22:48.561 "ddgst": ${ddgst:-false} 00:22:48.561 }, 00:22:48.561 "method": "bdev_nvme_attach_controller" 00:22:48.561 } 00:22:48.561 EOF 00:22:48.561 )") 00:22:48.561 
12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:48.561 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.561 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.561 { 00:22:48.561 "params": { 00:22:48.561 "name": "Nvme$subsystem", 00:22:48.561 "trtype": "$TEST_TRANSPORT", 00:22:48.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.561 "adrfam": "ipv4", 00:22:48.561 "trsvcid": "$NVMF_PORT", 00:22:48.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.561 "hdgst": ${hdgst:-false}, 00:22:48.561 "ddgst": ${ddgst:-false} 00:22:48.561 }, 00:22:48.561 "method": "bdev_nvme_attach_controller" 00:22:48.561 } 00:22:48.561 EOF 00:22:48.561 )") 00:22:48.561 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:48.561 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.561 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.561 { 00:22:48.561 "params": { 00:22:48.561 "name": "Nvme$subsystem", 00:22:48.561 "trtype": "$TEST_TRANSPORT", 00:22:48.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.561 "adrfam": "ipv4", 00:22:48.561 "trsvcid": "$NVMF_PORT", 00:22:48.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.561 "hdgst": ${hdgst:-false}, 00:22:48.561 "ddgst": ${ddgst:-false} 00:22:48.561 }, 00:22:48.561 "method": "bdev_nvme_attach_controller" 00:22:48.561 } 00:22:48.561 EOF 00:22:48.561 )") 00:22:48.561 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:48.561 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.561 12:11:37 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.561 { 00:22:48.561 "params": { 00:22:48.561 "name": "Nvme$subsystem", 00:22:48.561 "trtype": "$TEST_TRANSPORT", 00:22:48.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.561 "adrfam": "ipv4", 00:22:48.561 "trsvcid": "$NVMF_PORT", 00:22:48.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.561 "hdgst": ${hdgst:-false}, 00:22:48.561 "ddgst": ${ddgst:-false} 00:22:48.561 }, 00:22:48.561 "method": "bdev_nvme_attach_controller" 00:22:48.561 } 00:22:48.561 EOF 00:22:48.561 )") 00:22:48.561 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.561 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:48.561 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:22:48.561 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:48.561 12:11:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:48.561 "params": { 00:22:48.561 "name": "Nvme1", 00:22:48.561 "trtype": "tcp", 00:22:48.561 "traddr": "10.0.0.2", 00:22:48.561 "adrfam": "ipv4", 00:22:48.562 "trsvcid": "4420", 00:22:48.562 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.562 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:48.562 "hdgst": false, 00:22:48.562 "ddgst": false 00:22:48.562 }, 00:22:48.562 "method": "bdev_nvme_attach_controller" 00:22:48.562 },{ 00:22:48.562 "params": { 00:22:48.562 "name": "Nvme2", 00:22:48.562 "trtype": "tcp", 00:22:48.562 "traddr": "10.0.0.2", 00:22:48.562 "adrfam": "ipv4", 00:22:48.562 "trsvcid": "4420", 00:22:48.562 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:48.562 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:48.562 "hdgst": false, 00:22:48.562 "ddgst": false 00:22:48.562 }, 00:22:48.562 "method": "bdev_nvme_attach_controller" 00:22:48.562 },{ 00:22:48.562 "params": { 
00:22:48.562 "name": "Nvme3", 00:22:48.562 "trtype": "tcp", 00:22:48.562 "traddr": "10.0.0.2", 00:22:48.562 "adrfam": "ipv4", 00:22:48.562 "trsvcid": "4420", 00:22:48.562 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:48.562 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:48.562 "hdgst": false, 00:22:48.562 "ddgst": false 00:22:48.562 }, 00:22:48.562 "method": "bdev_nvme_attach_controller" 00:22:48.562 },{ 00:22:48.562 "params": { 00:22:48.562 "name": "Nvme4", 00:22:48.562 "trtype": "tcp", 00:22:48.562 "traddr": "10.0.0.2", 00:22:48.562 "adrfam": "ipv4", 00:22:48.562 "trsvcid": "4420", 00:22:48.562 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:48.562 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:48.562 "hdgst": false, 00:22:48.562 "ddgst": false 00:22:48.562 }, 00:22:48.562 "method": "bdev_nvme_attach_controller" 00:22:48.562 },{ 00:22:48.562 "params": { 00:22:48.562 "name": "Nvme5", 00:22:48.562 "trtype": "tcp", 00:22:48.562 "traddr": "10.0.0.2", 00:22:48.562 "adrfam": "ipv4", 00:22:48.562 "trsvcid": "4420", 00:22:48.562 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:48.562 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:48.562 "hdgst": false, 00:22:48.562 "ddgst": false 00:22:48.562 }, 00:22:48.562 "method": "bdev_nvme_attach_controller" 00:22:48.562 },{ 00:22:48.562 "params": { 00:22:48.562 "name": "Nvme6", 00:22:48.562 "trtype": "tcp", 00:22:48.562 "traddr": "10.0.0.2", 00:22:48.562 "adrfam": "ipv4", 00:22:48.562 "trsvcid": "4420", 00:22:48.562 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:48.562 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:48.562 "hdgst": false, 00:22:48.562 "ddgst": false 00:22:48.562 }, 00:22:48.562 "method": "bdev_nvme_attach_controller" 00:22:48.562 },{ 00:22:48.562 "params": { 00:22:48.562 "name": "Nvme7", 00:22:48.562 "trtype": "tcp", 00:22:48.562 "traddr": "10.0.0.2", 00:22:48.562 "adrfam": "ipv4", 00:22:48.562 "trsvcid": "4420", 00:22:48.562 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:48.562 "hostnqn": "nqn.2016-06.io.spdk:host7", 
00:22:48.562 "hdgst": false, 00:22:48.562 "ddgst": false 00:22:48.562 }, 00:22:48.562 "method": "bdev_nvme_attach_controller" 00:22:48.562 },{ 00:22:48.562 "params": { 00:22:48.562 "name": "Nvme8", 00:22:48.562 "trtype": "tcp", 00:22:48.562 "traddr": "10.0.0.2", 00:22:48.562 "adrfam": "ipv4", 00:22:48.562 "trsvcid": "4420", 00:22:48.562 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:48.562 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:48.562 "hdgst": false, 00:22:48.562 "ddgst": false 00:22:48.562 }, 00:22:48.562 "method": "bdev_nvme_attach_controller" 00:22:48.562 },{ 00:22:48.562 "params": { 00:22:48.562 "name": "Nvme9", 00:22:48.562 "trtype": "tcp", 00:22:48.562 "traddr": "10.0.0.2", 00:22:48.562 "adrfam": "ipv4", 00:22:48.562 "trsvcid": "4420", 00:22:48.562 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:48.562 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:48.562 "hdgst": false, 00:22:48.562 "ddgst": false 00:22:48.562 }, 00:22:48.562 "method": "bdev_nvme_attach_controller" 00:22:48.562 },{ 00:22:48.562 "params": { 00:22:48.562 "name": "Nvme10", 00:22:48.562 "trtype": "tcp", 00:22:48.562 "traddr": "10.0.0.2", 00:22:48.562 "adrfam": "ipv4", 00:22:48.562 "trsvcid": "4420", 00:22:48.562 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:48.562 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:48.562 "hdgst": false, 00:22:48.562 "ddgst": false 00:22:48.562 }, 00:22:48.562 "method": "bdev_nvme_attach_controller" 00:22:48.562 }' 00:22:48.562 [2024-06-10 12:11:37.929931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.562 [2024-06-10 12:11:37.999706] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.461 12:11:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:50.461 12:11:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:22:50.461 12:11:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock framework_wait_init 00:22:50.461 12:11:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:50.461 12:11:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:50.461 12:11:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:50.461 12:11:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2281810 00:22:50.461 12:11:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:22:50.461 12:11:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:22:51.026 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2281810 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:51.026 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2281574 00:22:51.027 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:51.027 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:51.027 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:51.027 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:51.027 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.027 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.027 { 00:22:51.027 "params": { 00:22:51.027 "name": "Nvme$subsystem", 00:22:51.027 "trtype": "$TEST_TRANSPORT", 00:22:51.027 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:22:51.027 "adrfam": "ipv4", 00:22:51.027 "trsvcid": "$NVMF_PORT", 00:22:51.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.027 "hdgst": ${hdgst:-false}, 00:22:51.027 "ddgst": ${ddgst:-false} 00:22:51.027 }, 00:22:51.027 "method": "bdev_nvme_attach_controller" 00:22:51.027 } 00:22:51.027 EOF 00:22:51.027 )") 00:22:51.027 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.027 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.027 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.027 { 00:22:51.027 "params": { 00:22:51.027 "name": "Nvme$subsystem", 00:22:51.027 "trtype": "$TEST_TRANSPORT", 00:22:51.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.027 "adrfam": "ipv4", 00:22:51.027 "trsvcid": "$NVMF_PORT", 00:22:51.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.027 "hdgst": ${hdgst:-false}, 00:22:51.027 "ddgst": ${ddgst:-false} 00:22:51.027 }, 00:22:51.027 "method": "bdev_nvme_attach_controller" 00:22:51.027 } 00:22:51.027 EOF 00:22:51.027 )") 00:22:51.027 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.027 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.027 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.027 { 00:22:51.027 "params": { 00:22:51.027 "name": "Nvme$subsystem", 00:22:51.027 "trtype": "$TEST_TRANSPORT", 00:22:51.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.027 "adrfam": "ipv4", 00:22:51.027 "trsvcid": "$NVMF_PORT", 00:22:51.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.027 
"hdgst": ${hdgst:-false}, 00:22:51.027 "ddgst": ${ddgst:-false} 00:22:51.027 }, 00:22:51.027 "method": "bdev_nvme_attach_controller" 00:22:51.027 } 00:22:51.027 EOF 00:22:51.027 )") 00:22:51.027 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.027 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.027 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.027 { 00:22:51.027 "params": { 00:22:51.027 "name": "Nvme$subsystem", 00:22:51.027 "trtype": "$TEST_TRANSPORT", 00:22:51.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.027 "adrfam": "ipv4", 00:22:51.027 "trsvcid": "$NVMF_PORT", 00:22:51.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.027 "hdgst": ${hdgst:-false}, 00:22:51.027 "ddgst": ${ddgst:-false} 00:22:51.027 }, 00:22:51.027 "method": "bdev_nvme_attach_controller" 00:22:51.027 } 00:22:51.027 EOF 00:22:51.027 )") 00:22:51.027 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.027 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.027 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.027 { 00:22:51.027 "params": { 00:22:51.027 "name": "Nvme$subsystem", 00:22:51.027 "trtype": "$TEST_TRANSPORT", 00:22:51.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.027 "adrfam": "ipv4", 00:22:51.027 "trsvcid": "$NVMF_PORT", 00:22:51.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.027 "hdgst": ${hdgst:-false}, 00:22:51.027 "ddgst": ${ddgst:-false} 00:22:51.027 }, 00:22:51.027 "method": "bdev_nvme_attach_controller" 00:22:51.027 } 00:22:51.027 EOF 00:22:51.027 )") 00:22:51.027 12:11:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.027 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.027 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.027 { 00:22:51.027 "params": { 00:22:51.027 "name": "Nvme$subsystem", 00:22:51.027 "trtype": "$TEST_TRANSPORT", 00:22:51.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.027 "adrfam": "ipv4", 00:22:51.027 "trsvcid": "$NVMF_PORT", 00:22:51.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.027 "hdgst": ${hdgst:-false}, 00:22:51.027 "ddgst": ${ddgst:-false} 00:22:51.027 }, 00:22:51.027 "method": "bdev_nvme_attach_controller" 00:22:51.027 } 00:22:51.027 EOF 00:22:51.027 )") 00:22:51.027 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.027 [2024-06-10 12:11:40.524719] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:22:51.027 [2024-06-10 12:11:40.524772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2282346 ] 00:22:51.027 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.027 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.027 { 00:22:51.027 "params": { 00:22:51.027 "name": "Nvme$subsystem", 00:22:51.027 "trtype": "$TEST_TRANSPORT", 00:22:51.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.027 "adrfam": "ipv4", 00:22:51.027 "trsvcid": "$NVMF_PORT", 00:22:51.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.027 "hdgst": ${hdgst:-false}, 00:22:51.027 "ddgst": ${ddgst:-false} 00:22:51.027 }, 00:22:51.027 "method": "bdev_nvme_attach_controller" 00:22:51.027 } 00:22:51.027 EOF 00:22:51.027 )") 00:22:51.027 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.027 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.027 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.027 { 00:22:51.027 "params": { 00:22:51.027 "name": "Nvme$subsystem", 00:22:51.027 "trtype": "$TEST_TRANSPORT", 00:22:51.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.027 "adrfam": "ipv4", 00:22:51.027 "trsvcid": "$NVMF_PORT", 00:22:51.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.027 "hdgst": ${hdgst:-false}, 00:22:51.027 "ddgst": ${ddgst:-false} 00:22:51.027 }, 00:22:51.027 "method": "bdev_nvme_attach_controller" 00:22:51.027 } 00:22:51.027 EOF 00:22:51.027 )") 00:22:51.027 12:11:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.027 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.027 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.027 { 00:22:51.027 "params": { 00:22:51.027 "name": "Nvme$subsystem", 00:22:51.027 "trtype": "$TEST_TRANSPORT", 00:22:51.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.027 "adrfam": "ipv4", 00:22:51.027 "trsvcid": "$NVMF_PORT", 00:22:51.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.027 "hdgst": ${hdgst:-false}, 00:22:51.027 "ddgst": ${ddgst:-false} 00:22:51.027 }, 00:22:51.027 "method": "bdev_nvme_attach_controller" 00:22:51.027 } 00:22:51.027 EOF 00:22:51.027 )") 00:22:51.285 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.285 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.285 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.285 { 00:22:51.285 "params": { 00:22:51.285 "name": "Nvme$subsystem", 00:22:51.285 "trtype": "$TEST_TRANSPORT", 00:22:51.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.285 "adrfam": "ipv4", 00:22:51.285 "trsvcid": "$NVMF_PORT", 00:22:51.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.285 "hdgst": ${hdgst:-false}, 00:22:51.285 "ddgst": ${ddgst:-false} 00:22:51.285 }, 00:22:51.285 "method": "bdev_nvme_attach_controller" 00:22:51.285 } 00:22:51.285 EOF 00:22:51.285 )") 00:22:51.285 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.285 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.285 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:22:51.285 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:51.285 12:11:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:51.285 "params": { 00:22:51.285 "name": "Nvme1", 00:22:51.285 "trtype": "tcp", 00:22:51.285 "traddr": "10.0.0.2", 00:22:51.285 "adrfam": "ipv4", 00:22:51.285 "trsvcid": "4420", 00:22:51.285 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.285 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:51.285 "hdgst": false, 00:22:51.285 "ddgst": false 00:22:51.285 }, 00:22:51.285 "method": "bdev_nvme_attach_controller" 00:22:51.285 },{ 00:22:51.285 "params": { 00:22:51.285 "name": "Nvme2", 00:22:51.285 "trtype": "tcp", 00:22:51.285 "traddr": "10.0.0.2", 00:22:51.285 "adrfam": "ipv4", 00:22:51.285 "trsvcid": "4420", 00:22:51.285 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:51.285 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:51.285 "hdgst": false, 00:22:51.285 "ddgst": false 00:22:51.285 }, 00:22:51.285 "method": "bdev_nvme_attach_controller" 00:22:51.285 },{ 00:22:51.285 "params": { 00:22:51.285 "name": "Nvme3", 00:22:51.285 "trtype": "tcp", 00:22:51.285 "traddr": "10.0.0.2", 00:22:51.285 "adrfam": "ipv4", 00:22:51.285 "trsvcid": "4420", 00:22:51.285 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:51.285 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:51.285 "hdgst": false, 00:22:51.285 "ddgst": false 00:22:51.285 }, 00:22:51.285 "method": "bdev_nvme_attach_controller" 00:22:51.285 },{ 00:22:51.285 "params": { 00:22:51.285 "name": "Nvme4", 00:22:51.285 "trtype": "tcp", 00:22:51.285 "traddr": "10.0.0.2", 00:22:51.285 "adrfam": "ipv4", 00:22:51.285 "trsvcid": "4420", 00:22:51.285 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:51.285 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:51.285 "hdgst": false, 00:22:51.285 "ddgst": false 00:22:51.285 }, 00:22:51.285 "method": "bdev_nvme_attach_controller" 00:22:51.285 },{ 00:22:51.285 "params": { 00:22:51.286 "name": "Nvme5", 00:22:51.286 
"trtype": "tcp", 00:22:51.286 "traddr": "10.0.0.2", 00:22:51.286 "adrfam": "ipv4", 00:22:51.286 "trsvcid": "4420", 00:22:51.286 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:51.286 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:51.286 "hdgst": false, 00:22:51.286 "ddgst": false 00:22:51.286 }, 00:22:51.286 "method": "bdev_nvme_attach_controller" 00:22:51.286 },{ 00:22:51.286 "params": { 00:22:51.286 "name": "Nvme6", 00:22:51.286 "trtype": "tcp", 00:22:51.286 "traddr": "10.0.0.2", 00:22:51.286 "adrfam": "ipv4", 00:22:51.286 "trsvcid": "4420", 00:22:51.286 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:51.286 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:51.286 "hdgst": false, 00:22:51.286 "ddgst": false 00:22:51.286 }, 00:22:51.286 "method": "bdev_nvme_attach_controller" 00:22:51.286 },{ 00:22:51.286 "params": { 00:22:51.286 "name": "Nvme7", 00:22:51.286 "trtype": "tcp", 00:22:51.286 "traddr": "10.0.0.2", 00:22:51.286 "adrfam": "ipv4", 00:22:51.286 "trsvcid": "4420", 00:22:51.286 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:51.286 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:51.286 "hdgst": false, 00:22:51.286 "ddgst": false 00:22:51.286 }, 00:22:51.286 "method": "bdev_nvme_attach_controller" 00:22:51.286 },{ 00:22:51.286 "params": { 00:22:51.286 "name": "Nvme8", 00:22:51.286 "trtype": "tcp", 00:22:51.286 "traddr": "10.0.0.2", 00:22:51.286 "adrfam": "ipv4", 00:22:51.286 "trsvcid": "4420", 00:22:51.286 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:51.286 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:51.286 "hdgst": false, 00:22:51.286 "ddgst": false 00:22:51.286 }, 00:22:51.286 "method": "bdev_nvme_attach_controller" 00:22:51.286 },{ 00:22:51.286 "params": { 00:22:51.286 "name": "Nvme9", 00:22:51.286 "trtype": "tcp", 00:22:51.286 "traddr": "10.0.0.2", 00:22:51.286 "adrfam": "ipv4", 00:22:51.286 "trsvcid": "4420", 00:22:51.286 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:51.286 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:51.286 "hdgst": false, 00:22:51.286 "ddgst": 
false 00:22:51.286 }, 00:22:51.286 "method": "bdev_nvme_attach_controller" 00:22:51.286 },{ 00:22:51.286 "params": { 00:22:51.286 "name": "Nvme10", 00:22:51.286 "trtype": "tcp", 00:22:51.286 "traddr": "10.0.0.2", 00:22:51.286 "adrfam": "ipv4", 00:22:51.286 "trsvcid": "4420", 00:22:51.286 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:51.286 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:51.286 "hdgst": false, 00:22:51.286 "ddgst": false 00:22:51.286 }, 00:22:51.286 "method": "bdev_nvme_attach_controller" 00:22:51.286 }' 00:22:51.286 [2024-06-10 12:11:40.597443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.286 [2024-06-10 12:11:40.667920] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.659 Running I/O for 1 seconds... 00:22:54.034 00:22:54.034 Latency(us) 00:22:54.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.034 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:54.034 Verification LBA range: start 0x0 length 0x400 00:22:54.034 Nvme1n1 : 1.12 284.58 17.79 0.00 0.00 223025.73 16986.93 205520.90 00:22:54.034 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:54.034 Verification LBA range: start 0x0 length 0x400 00:22:54.034 Nvme2n1 : 1.12 286.75 17.92 0.00 0.00 218297.96 16148.07 223136.97 00:22:54.034 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:54.034 Verification LBA range: start 0x0 length 0x400 00:22:54.034 Nvme3n1 : 1.12 343.40 21.46 0.00 0.00 178196.96 13369.34 204682.04 00:22:54.034 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:54.034 Verification LBA range: start 0x0 length 0x400 00:22:54.034 Nvme4n1 : 1.10 311.06 19.44 0.00 0.00 189271.38 5138.02 199648.87 00:22:54.034 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:54.034 Verification LBA range: start 0x0 length 0x400 00:22:54.034 Nvme5n1 : 1.13 282.99 17.69 0.00 
0.00 212489.67 18140.36 203004.31 00:22:54.034 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:54.034 Verification LBA range: start 0x0 length 0x400 00:22:54.034 Nvme6n1 : 1.13 283.51 17.72 0.00 0.00 209046.57 16882.07 219781.53 00:22:54.034 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:54.034 Verification LBA range: start 0x0 length 0x400 00:22:54.034 Nvme7n1 : 1.11 288.07 18.00 0.00 0.00 202502.14 16252.93 201326.59 00:22:54.034 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:54.034 Verification LBA range: start 0x0 length 0x400 00:22:54.034 Nvme8n1 : 1.12 285.63 17.85 0.00 0.00 201404.91 16672.36 203004.31 00:22:54.034 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:54.034 Verification LBA range: start 0x0 length 0x400 00:22:54.034 Nvme9n1 : 1.13 282.28 17.64 0.00 0.00 200949.60 16672.36 224814.69 00:22:54.034 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:54.034 Verification LBA range: start 0x0 length 0x400 00:22:54.034 Nvme10n1 : 1.14 281.34 17.58 0.00 0.00 198967.95 16777.22 226492.42 00:22:54.034 =================================================================================================================== 00:22:54.034 Total : 2929.60 183.10 0.00 0.00 202829.42 5138.02 226492.42 00:22:54.034 12:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:22:54.034 12:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:54.034 12:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:54.034 12:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:54.034 12:11:43 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:54.034 12:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:54.034 12:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:22:54.034 12:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:54.034 12:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:22:54.034 12:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:54.034 12:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:54.034 rmmod nvme_tcp 00:22:54.034 rmmod nvme_fabrics 00:22:54.034 rmmod nvme_keyring 00:22:54.034 12:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:54.034 12:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:22:54.034 12:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:22:54.034 12:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2281574 ']' 00:22:54.034 12:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2281574 00:22:54.034 12:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@949 -- # '[' -z 2281574 ']' 00:22:54.034 12:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # kill -0 2281574 00:22:54.034 12:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # uname 00:22:54.034 12:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:54.034 12:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2281574 00:22:54.034 12:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:54.034 12:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:54.034 12:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2281574' 00:22:54.034 killing process with pid 2281574 00:22:54.034 12:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # kill 2281574 00:22:54.034 12:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # wait 2281574 00:22:54.602 12:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:54.602 12:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:54.602 12:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:54.602 12:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:54.602 12:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:54.602 12:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.602 12:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:54.602 12:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.505 12:11:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:56.505 00:22:56.505 real 0m16.032s 00:22:56.505 user 0m34.892s 00:22:56.505 sys 0m6.452s 00:22:56.505 12:11:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:56.505 12:11:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:56.505 
************************************ 00:22:56.505 END TEST nvmf_shutdown_tc1 00:22:56.505 ************************************ 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:56.764 ************************************ 00:22:56.764 START TEST nvmf_shutdown_tc2 00:22:56.764 ************************************ 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc2 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.764 12:11:46 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 
-- # local -ga mlx 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:56.764 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:56.764 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:56.764 Found net devices under 0000:af:00.0: cvl_0_0 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:56.764 12:11:46 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:56.764 Found net devices under 0000:af:00.1: cvl_0_1 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:56.764 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:56.765 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:56.765 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:56.765 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:56.765 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:56.765 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:56.765 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:56.765 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:56.765 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:56.765 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:56.765 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:56.765 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:56.765 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:56.765 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:56.765 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:56.765 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:56.765 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:56.765 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:57.023 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:57.023 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:57.023 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:57.023 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:57.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:57.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:22:57.023 00:22:57.023 --- 10.0.0.2 ping statistics --- 00:22:57.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.023 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:22:57.023 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:57.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:57.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:22:57.023 00:22:57.023 --- 10.0.0.1 ping statistics --- 00:22:57.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.023 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:22:57.023 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:57.023 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:22:57.023 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:57.023 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:57.023 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:57.023 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:57.023 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:57.023 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:57.023 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:57.023 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:57.023 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:22:57.023 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:57.023 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:57.023 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2283513 00:22:57.023 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2283513 00:22:57.023 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:57.023 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 2283513 ']' 00:22:57.023 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.023 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:57.023 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.023 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:57.023 12:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:57.023 [2024-06-10 12:11:46.519365] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:22:57.023 [2024-06-10 12:11:46.519410] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.281 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.281 [2024-06-10 12:11:46.595070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:57.281 [2024-06-10 12:11:46.668913] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:57.281 [2024-06-10 12:11:46.668956] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:57.281 [2024-06-10 12:11:46.668966] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:57.281 [2024-06-10 12:11:46.668975] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:57.281 [2024-06-10 12:11:46.668982] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:57.281 [2024-06-10 12:11:46.669037] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.281 [2024-06-10 12:11:46.669111] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:22:57.281 [2024-06-10 12:11:46.669230] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.281 [2024-06-10 12:11:46.669231] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:22:57.846 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:57.846 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:22:57.846 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:57.846 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:57.846 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:57.846 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:57.846 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:57.846 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:57.846 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:57.846 [2024-06-10 12:11:47.363121] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.104 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:58.104 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:58.104 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:58.104 
12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:58.104 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:58.104 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:58.104 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:58.104 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:58.104 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:58.104 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:58.104 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:58.104 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:58.104 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:58.104 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:58.104 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:58.104 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:58.104 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:58.104 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:58.104 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:58.104 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:58.104 12:11:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:58.104 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:58.104 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:58.104 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:58.104 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:58.104 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:58.104 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:58.104 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:58.104 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:58.104 Malloc1 00:22:58.104 [2024-06-10 12:11:47.474022] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.104 Malloc2 00:22:58.104 Malloc3 00:22:58.104 Malloc4 00:22:58.104 Malloc5 00:22:58.362 Malloc6 00:22:58.362 Malloc7 00:22:58.362 Malloc8 00:22:58.362 Malloc9 00:22:58.362 Malloc10 00:22:58.362 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:58.362 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:58.362 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:58.362 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2283820 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 
2283820 /var/tmp/bdevperf.sock 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 2283820 ']' 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:58.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:58.621 { 00:22:58.621 "params": { 00:22:58.621 "name": "Nvme$subsystem", 00:22:58.621 "trtype": "$TEST_TRANSPORT", 00:22:58.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.621 "adrfam": "ipv4", 00:22:58.621 "trsvcid": "$NVMF_PORT", 
00:22:58.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.621 "hdgst": ${hdgst:-false}, 00:22:58.621 "ddgst": ${ddgst:-false} 00:22:58.621 }, 00:22:58.621 "method": "bdev_nvme_attach_controller" 00:22:58.621 } 00:22:58.621 EOF 00:22:58.621 )") 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:58.621 { 00:22:58.621 "params": { 00:22:58.621 "name": "Nvme$subsystem", 00:22:58.621 "trtype": "$TEST_TRANSPORT", 00:22:58.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.621 "adrfam": "ipv4", 00:22:58.621 "trsvcid": "$NVMF_PORT", 00:22:58.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.621 "hdgst": ${hdgst:-false}, 00:22:58.621 "ddgst": ${ddgst:-false} 00:22:58.621 }, 00:22:58.621 "method": "bdev_nvme_attach_controller" 00:22:58.621 } 00:22:58.621 EOF 00:22:58.621 )") 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:58.621 { 00:22:58.621 "params": { 00:22:58.621 "name": "Nvme$subsystem", 00:22:58.621 "trtype": "$TEST_TRANSPORT", 00:22:58.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.621 "adrfam": "ipv4", 00:22:58.621 "trsvcid": "$NVMF_PORT", 00:22:58.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.621 "hdgst": ${hdgst:-false}, 00:22:58.621 "ddgst": ${ddgst:-false} 00:22:58.621 }, 00:22:58.621 
"method": "bdev_nvme_attach_controller" 00:22:58.621 } 00:22:58.621 EOF 00:22:58.621 )") 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:58.621 { 00:22:58.621 "params": { 00:22:58.621 "name": "Nvme$subsystem", 00:22:58.621 "trtype": "$TEST_TRANSPORT", 00:22:58.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.621 "adrfam": "ipv4", 00:22:58.621 "trsvcid": "$NVMF_PORT", 00:22:58.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.621 "hdgst": ${hdgst:-false}, 00:22:58.621 "ddgst": ${ddgst:-false} 00:22:58.621 }, 00:22:58.621 "method": "bdev_nvme_attach_controller" 00:22:58.621 } 00:22:58.621 EOF 00:22:58.621 )") 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:58.621 { 00:22:58.621 "params": { 00:22:58.621 "name": "Nvme$subsystem", 00:22:58.621 "trtype": "$TEST_TRANSPORT", 00:22:58.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.621 "adrfam": "ipv4", 00:22:58.621 "trsvcid": "$NVMF_PORT", 00:22:58.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.621 "hdgst": ${hdgst:-false}, 00:22:58.621 "ddgst": ${ddgst:-false} 00:22:58.621 }, 00:22:58.621 "method": "bdev_nvme_attach_controller" 00:22:58.621 } 00:22:58.621 EOF 00:22:58.621 )") 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:58.621 12:11:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:58.621 { 00:22:58.621 "params": { 00:22:58.621 "name": "Nvme$subsystem", 00:22:58.621 "trtype": "$TEST_TRANSPORT", 00:22:58.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.621 "adrfam": "ipv4", 00:22:58.621 "trsvcid": "$NVMF_PORT", 00:22:58.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.621 "hdgst": ${hdgst:-false}, 00:22:58.621 "ddgst": ${ddgst:-false} 00:22:58.621 }, 00:22:58.621 "method": "bdev_nvme_attach_controller" 00:22:58.621 } 00:22:58.621 EOF 00:22:58.621 )") 00:22:58.621 [2024-06-10 12:11:47.958804] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:22:58.621 [2024-06-10 12:11:47.958854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2283820 ] 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:58.621 { 00:22:58.621 "params": { 00:22:58.621 "name": "Nvme$subsystem", 00:22:58.621 "trtype": "$TEST_TRANSPORT", 00:22:58.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.621 "adrfam": "ipv4", 00:22:58.621 "trsvcid": "$NVMF_PORT", 00:22:58.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.621 "hdgst": ${hdgst:-false}, 00:22:58.621 "ddgst": ${ddgst:-false} 00:22:58.621 }, 00:22:58.621 "method": "bdev_nvme_attach_controller" 
00:22:58.621 } 00:22:58.621 EOF 00:22:58.621 )") 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:58.621 { 00:22:58.621 "params": { 00:22:58.621 "name": "Nvme$subsystem", 00:22:58.621 "trtype": "$TEST_TRANSPORT", 00:22:58.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.621 "adrfam": "ipv4", 00:22:58.621 "trsvcid": "$NVMF_PORT", 00:22:58.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.621 "hdgst": ${hdgst:-false}, 00:22:58.621 "ddgst": ${ddgst:-false} 00:22:58.621 }, 00:22:58.621 "method": "bdev_nvme_attach_controller" 00:22:58.621 } 00:22:58.621 EOF 00:22:58.621 )") 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:58.621 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:58.621 { 00:22:58.621 "params": { 00:22:58.621 "name": "Nvme$subsystem", 00:22:58.621 "trtype": "$TEST_TRANSPORT", 00:22:58.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.621 "adrfam": "ipv4", 00:22:58.621 "trsvcid": "$NVMF_PORT", 00:22:58.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.621 "hdgst": ${hdgst:-false}, 00:22:58.621 "ddgst": ${ddgst:-false} 00:22:58.621 }, 00:22:58.621 "method": "bdev_nvme_attach_controller" 00:22:58.621 } 00:22:58.621 EOF 00:22:58.621 )") 00:22:58.622 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:58.622 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:22:58.622 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:58.622 { 00:22:58.622 "params": { 00:22:58.622 "name": "Nvme$subsystem", 00:22:58.622 "trtype": "$TEST_TRANSPORT", 00:22:58.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.622 "adrfam": "ipv4", 00:22:58.622 "trsvcid": "$NVMF_PORT", 00:22:58.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.622 "hdgst": ${hdgst:-false}, 00:22:58.622 "ddgst": ${ddgst:-false} 00:22:58.622 }, 00:22:58.622 "method": "bdev_nvme_attach_controller" 00:22:58.622 } 00:22:58.622 EOF 00:22:58.622 )") 00:22:58.622 EAL: No free 2048 kB hugepages reported on node 1 00:22:58.622 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:58.622 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:22:58.622 12:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:22:58.622 12:11:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:58.622 "params": { 00:22:58.622 "name": "Nvme1", 00:22:58.622 "trtype": "tcp", 00:22:58.622 "traddr": "10.0.0.2", 00:22:58.622 "adrfam": "ipv4", 00:22:58.622 "trsvcid": "4420", 00:22:58.622 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.622 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:58.622 "hdgst": false, 00:22:58.622 "ddgst": false 00:22:58.622 }, 00:22:58.622 "method": "bdev_nvme_attach_controller" 00:22:58.622 },{ 00:22:58.622 "params": { 00:22:58.622 "name": "Nvme2", 00:22:58.622 "trtype": "tcp", 00:22:58.622 "traddr": "10.0.0.2", 00:22:58.622 "adrfam": "ipv4", 00:22:58.622 "trsvcid": "4420", 00:22:58.622 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:58.622 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:58.622 "hdgst": false, 00:22:58.622 "ddgst": false 00:22:58.622 }, 00:22:58.622 "method": "bdev_nvme_attach_controller" 
00:22:58.622 },{ 00:22:58.622 "params": { 00:22:58.622 "name": "Nvme3", 00:22:58.622 "trtype": "tcp", 00:22:58.622 "traddr": "10.0.0.2", 00:22:58.622 "adrfam": "ipv4", 00:22:58.622 "trsvcid": "4420", 00:22:58.622 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:58.622 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:58.622 "hdgst": false, 00:22:58.622 "ddgst": false 00:22:58.622 }, 00:22:58.622 "method": "bdev_nvme_attach_controller" 00:22:58.622 },{ 00:22:58.622 "params": { 00:22:58.622 "name": "Nvme4", 00:22:58.622 "trtype": "tcp", 00:22:58.622 "traddr": "10.0.0.2", 00:22:58.622 "adrfam": "ipv4", 00:22:58.622 "trsvcid": "4420", 00:22:58.622 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:58.622 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:58.622 "hdgst": false, 00:22:58.622 "ddgst": false 00:22:58.622 }, 00:22:58.622 "method": "bdev_nvme_attach_controller" 00:22:58.622 },{ 00:22:58.622 "params": { 00:22:58.622 "name": "Nvme5", 00:22:58.622 "trtype": "tcp", 00:22:58.622 "traddr": "10.0.0.2", 00:22:58.622 "adrfam": "ipv4", 00:22:58.622 "trsvcid": "4420", 00:22:58.622 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:58.622 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:58.622 "hdgst": false, 00:22:58.622 "ddgst": false 00:22:58.622 }, 00:22:58.622 "method": "bdev_nvme_attach_controller" 00:22:58.622 },{ 00:22:58.622 "params": { 00:22:58.622 "name": "Nvme6", 00:22:58.622 "trtype": "tcp", 00:22:58.622 "traddr": "10.0.0.2", 00:22:58.622 "adrfam": "ipv4", 00:22:58.622 "trsvcid": "4420", 00:22:58.622 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:58.622 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:58.622 "hdgst": false, 00:22:58.622 "ddgst": false 00:22:58.622 }, 00:22:58.622 "method": "bdev_nvme_attach_controller" 00:22:58.622 },{ 00:22:58.622 "params": { 00:22:58.622 "name": "Nvme7", 00:22:58.622 "trtype": "tcp", 00:22:58.622 "traddr": "10.0.0.2", 00:22:58.622 "adrfam": "ipv4", 00:22:58.622 "trsvcid": "4420", 00:22:58.622 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:58.622 
"hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:58.622 "hdgst": false, 00:22:58.622 "ddgst": false 00:22:58.622 }, 00:22:58.622 "method": "bdev_nvme_attach_controller" 00:22:58.622 },{ 00:22:58.622 "params": { 00:22:58.622 "name": "Nvme8", 00:22:58.622 "trtype": "tcp", 00:22:58.622 "traddr": "10.0.0.2", 00:22:58.622 "adrfam": "ipv4", 00:22:58.622 "trsvcid": "4420", 00:22:58.622 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:58.622 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:58.622 "hdgst": false, 00:22:58.622 "ddgst": false 00:22:58.622 }, 00:22:58.622 "method": "bdev_nvme_attach_controller" 00:22:58.622 },{ 00:22:58.622 "params": { 00:22:58.622 "name": "Nvme9", 00:22:58.622 "trtype": "tcp", 00:22:58.622 "traddr": "10.0.0.2", 00:22:58.622 "adrfam": "ipv4", 00:22:58.622 "trsvcid": "4420", 00:22:58.622 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:58.622 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:58.622 "hdgst": false, 00:22:58.622 "ddgst": false 00:22:58.622 }, 00:22:58.622 "method": "bdev_nvme_attach_controller" 00:22:58.622 },{ 00:22:58.622 "params": { 00:22:58.622 "name": "Nvme10", 00:22:58.622 "trtype": "tcp", 00:22:58.622 "traddr": "10.0.0.2", 00:22:58.622 "adrfam": "ipv4", 00:22:58.622 "trsvcid": "4420", 00:22:58.622 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:58.622 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:58.622 "hdgst": false, 00:22:58.622 "ddgst": false 00:22:58.622 }, 00:22:58.622 "method": "bdev_nvme_attach_controller" 00:22:58.622 }' 00:22:58.622 [2024-06-10 12:11:48.031183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.622 [2024-06-10 12:11:48.099746] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.521 Running I/O for 10 seconds... 
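The repeated heredoc fragments above are gen_nvmf_target_json (nvmf/common.sh) building the --json config fed to bdevperf: one bdev_nvme_attach_controller entry per subsystem id, comma-joined via IFS=, and printed. A minimal sketch of that assembly, with the literal values substituted for the $TEST_TRANSPORT/$NVMF_FIRST_TARGET_IP/$NVMF_PORT variables seen in the trace and the final `jq .` normalization pass omitted:

```shell
#!/usr/bin/env bash
# Sketch: build one attach_controller config object per subsystem id and
# comma-join them, mirroring the config+=(...) / IFS=, pattern in the log.
gen_config() {
    local config=() subsystem
    for subsystem in "$@"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"   # "${config[*]}" comma-joins under IFS=,
}
```

Called as `gen_config 1 2 3 4 5 6 7 8 9 10`, this yields the ten Nvme1..Nvme10 entries visible in the printf output above.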
00:23:00.521 12:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:00.521 12:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:23:00.521 12:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:00.521 12:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:00.521 12:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.521 12:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:00.521 12:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:00.521 12:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:00.521 12:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:00.521 12:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:23:00.521 12:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:23:00.521 12:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:00.521 12:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:00.521 12:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:00.521 12:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:00.521 12:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:00.521 12:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set 
+x 00:23:00.521 12:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:00.521 12:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:00.521 12:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:00.521 12:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:00.779 12:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:00.779 12:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:00.779 12:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:00.779 12:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:00.779 12:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:00.779 12:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.779 12:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:00.779 12:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:00.779 12:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:00.779 12:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:01.036 12:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:01.036 12:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:01.036 12:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:01.036 12:11:50 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:01.036 12:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.036 12:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:01.036 12:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.036 12:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=195 00:23:01.036 12:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:23:01.036 12:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:23:01.036 12:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:23:01.036 12:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:23:01.036 12:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2283820 00:23:01.036 12:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 2283820 ']' 00:23:01.036 12:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 2283820 00:23:01.036 12:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:23:01.036 12:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:01.036 12:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2283820 00:23:01.036 12:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:01.037 12:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:01.037 12:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@967 -- # echo 'killing process with pid 2283820' 00:23:01.037 killing process with pid 2283820 00:23:01.037 12:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 2283820 00:23:01.037 12:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 2283820 00:23:01.293 Received shutdown signal, test time was about 0.920799 seconds 00:23:01.293 00:23:01.293 Latency(us) 00:23:01.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.293 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:01.293 Verification LBA range: start 0x0 length 0x400 00:23:01.293 Nvme1n1 : 0.90 282.99 17.69 0.00 0.00 223944.50 16986.93 221459.25 00:23:01.293 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:01.293 Verification LBA range: start 0x0 length 0x400 00:23:01.293 Nvme2n1 : 0.89 286.90 17.93 0.00 0.00 217079.40 15938.36 205520.90 00:23:01.293 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:01.293 Verification LBA range: start 0x0 length 0x400 00:23:01.293 Nvme3n1 : 0.89 289.04 18.06 0.00 0.00 211678.62 13421.77 189582.54 00:23:01.293 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:01.293 Verification LBA range: start 0x0 length 0x400 00:23:01.293 Nvme4n1 : 0.92 348.72 21.80 0.00 0.00 172239.18 11377.05 209715.20 00:23:01.293 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:01.293 Verification LBA range: start 0x0 length 0x400 00:23:01.293 Nvme5n1 : 0.91 286.16 17.89 0.00 0.00 206091.79 3106.41 202165.45 00:23:01.293 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:01.293 Verification LBA range: start 0x0 length 0x400 00:23:01.293 Nvme6n1 : 0.91 280.95 17.56 0.00 0.00 206977.43 18140.36 209715.20 00:23:01.293 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:23:01.293 Verification LBA range: start 0x0 length 0x400 00:23:01.293 Nvme7n1 : 0.90 284.21 17.76 0.00 0.00 200591.16 13841.20 201326.59 00:23:01.293 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:01.293 Verification LBA range: start 0x0 length 0x400 00:23:01.293 Nvme8n1 : 0.89 286.61 17.91 0.00 0.00 194537.27 14365.49 207198.62 00:23:01.293 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:01.293 Verification LBA range: start 0x0 length 0x400 00:23:01.293 Nvme9n1 : 0.91 279.87 17.49 0.00 0.00 196705.69 15833.50 218103.81 00:23:01.293 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:01.293 Verification LBA range: start 0x0 length 0x400 00:23:01.293 Nvme10n1 : 0.92 278.21 17.39 0.00 0.00 194260.17 16777.22 229847.86 00:23:01.293 =================================================================================================================== 00:23:01.293 Total : 2903.68 181.48 0.00 0.00 201683.03 3106.41 229847.86 00:23:01.293 12:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:23:02.668 12:11:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2283513 00:23:02.668 12:11:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:23:02.668 12:11:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:02.668 12:11:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:02.668 12:11:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:02.668 12:11:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:02.668 12:11:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:23:02.668 12:11:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:23:02.668 12:11:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:02.668 12:11:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:23:02.668 12:11:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:02.668 12:11:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:02.668 rmmod nvme_tcp 00:23:02.668 rmmod nvme_fabrics 00:23:02.668 rmmod nvme_keyring 00:23:02.668 12:11:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:02.668 12:11:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:23:02.668 12:11:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:23:02.668 12:11:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2283513 ']' 00:23:02.668 12:11:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2283513 00:23:02.668 12:11:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 2283513 ']' 00:23:02.668 12:11:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 2283513 00:23:02.668 12:11:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:23:02.668 12:11:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:02.668 12:11:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2283513 00:23:02.668 12:11:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:23:02.668 12:11:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:23:02.668 12:11:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2283513' 00:23:02.668 killing process with pid 2283513 00:23:02.668 12:11:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 2283513 00:23:02.668 12:11:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 2283513 00:23:02.927 12:11:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:02.927 12:11:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:02.927 12:11:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:02.927 12:11:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:02.927 12:11:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:02.927 12:11:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.927 12:11:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:02.927 12:11:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:05.461 00:23:05.461 real 0m8.300s 00:23:05.461 user 0m25.113s 00:23:05.461 sys 0m1.653s 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:05.461 ************************************ 00:23:05.461 END TEST nvmf_shutdown_tc2 00:23:05.461 ************************************ 
00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:05.461 ************************************ 00:23:05.461 START TEST nvmf_shutdown_tc3 00:23:05.461 ************************************ 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc3 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 
00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:05.461 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:05.461 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:05.461 12:11:54 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:05.461 Found net devices under 0000:af:00.0: cvl_0_0 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:05.461 12:11:54 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:05.461 Found net devices under 0000:af:00.1: cvl_0_1 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:05.461 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:05.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:05.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:23:05.462 00:23:05.462 --- 10.0.0.2 ping statistics --- 00:23:05.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.462 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:05.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:05.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:23:05.462 00:23:05.462 --- 10.0.0.1 ping statistics --- 00:23:05.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.462 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2285024 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2285024 00:23:05.462 12:11:54 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 2285024 ']' 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:05.462 12:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:05.462 [2024-06-10 12:11:54.905569] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:23:05.462 [2024-06-10 12:11:54.905617] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.462 EAL: No free 2048 kB hugepages reported on node 1 00:23:05.719 [2024-06-10 12:11:54.980035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:05.719 [2024-06-10 12:11:55.052936] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.719 [2024-06-10 12:11:55.052975] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:05.719 [2024-06-10 12:11:55.052985] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.719 [2024-06-10 12:11:55.052993] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.719 [2024-06-10 12:11:55.053016] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:05.719 [2024-06-10 12:11:55.053115] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:05.719 [2024-06-10 12:11:55.053207] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:05.719 [2024-06-10 12:11:55.053296] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.719 [2024-06-10 12:11:55.053297] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:23:06.285 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:06.285 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:23:06.285 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:06.285 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:06.285 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:06.285 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:06.285 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:06.285 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:06.285 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:06.285 [2024-06-10 12:11:55.761211] tcp.c: 672:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:23:06.285 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:06.285 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:06.285 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:06.285 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:06.285 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:06.285 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:06.285 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:06.285 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:06.285 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:06.285 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:06.285 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:06.285 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:06.285 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:06.285 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:06.285 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:06.285 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:06.589 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for 
i in "${num_subsystems[@]}" 00:23:06.589 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:06.589 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:06.589 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:06.589 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:06.589 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:06.589 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:06.589 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:06.589 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:06.589 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:06.589 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:06.589 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:06.589 12:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:06.589 Malloc1 00:23:06.589 [2024-06-10 12:11:55.872089] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.589 Malloc2 00:23:06.589 Malloc3 00:23:06.589 Malloc4 00:23:06.589 Malloc5 00:23:06.589 Malloc6 00:23:06.589 Malloc7 00:23:06.897 Malloc8 00:23:06.897 Malloc9 00:23:06.897 Malloc10 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@729 -- # xtrace_disable 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2285340 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2285340 /var/tmp/bdevperf.sock 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 2285340 ']' 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:06.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:06.897 { 00:23:06.897 "params": { 00:23:06.897 "name": "Nvme$subsystem", 00:23:06.897 "trtype": "$TEST_TRANSPORT", 00:23:06.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:06.897 "adrfam": "ipv4", 00:23:06.897 "trsvcid": "$NVMF_PORT", 00:23:06.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:06.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:06.897 "hdgst": ${hdgst:-false}, 00:23:06.897 "ddgst": ${ddgst:-false} 00:23:06.897 }, 00:23:06.897 "method": "bdev_nvme_attach_controller" 00:23:06.897 } 00:23:06.897 EOF 00:23:06.897 )") 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:06.897 { 00:23:06.897 "params": { 00:23:06.897 "name": 
"Nvme$subsystem", 00:23:06.897 "trtype": "$TEST_TRANSPORT", 00:23:06.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:06.897 "adrfam": "ipv4", 00:23:06.897 "trsvcid": "$NVMF_PORT", 00:23:06.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:06.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:06.897 "hdgst": ${hdgst:-false}, 00:23:06.897 "ddgst": ${ddgst:-false} 00:23:06.897 }, 00:23:06.897 "method": "bdev_nvme_attach_controller" 00:23:06.897 } 00:23:06.897 EOF 00:23:06.897 )") 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:06.897 { 00:23:06.897 "params": { 00:23:06.897 "name": "Nvme$subsystem", 00:23:06.897 "trtype": "$TEST_TRANSPORT", 00:23:06.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:06.897 "adrfam": "ipv4", 00:23:06.897 "trsvcid": "$NVMF_PORT", 00:23:06.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:06.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:06.897 "hdgst": ${hdgst:-false}, 00:23:06.897 "ddgst": ${ddgst:-false} 00:23:06.897 }, 00:23:06.897 "method": "bdev_nvme_attach_controller" 00:23:06.897 } 00:23:06.897 EOF 00:23:06.897 )") 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:06.897 { 00:23:06.897 "params": { 00:23:06.897 "name": "Nvme$subsystem", 00:23:06.897 "trtype": "$TEST_TRANSPORT", 00:23:06.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:06.897 "adrfam": "ipv4", 00:23:06.897 "trsvcid": "$NVMF_PORT", 00:23:06.897 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:23:06.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:06.897 "hdgst": ${hdgst:-false}, 00:23:06.897 "ddgst": ${ddgst:-false} 00:23:06.897 }, 00:23:06.897 "method": "bdev_nvme_attach_controller" 00:23:06.897 } 00:23:06.897 EOF 00:23:06.897 )") 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:06.897 { 00:23:06.897 "params": { 00:23:06.897 "name": "Nvme$subsystem", 00:23:06.897 "trtype": "$TEST_TRANSPORT", 00:23:06.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:06.897 "adrfam": "ipv4", 00:23:06.897 "trsvcid": "$NVMF_PORT", 00:23:06.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:06.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:06.897 "hdgst": ${hdgst:-false}, 00:23:06.897 "ddgst": ${ddgst:-false} 00:23:06.897 }, 00:23:06.897 "method": "bdev_nvme_attach_controller" 00:23:06.897 } 00:23:06.897 EOF 00:23:06.897 )") 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:06.897 [2024-06-10 12:11:56.352149] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:23:06.897 [2024-06-10 12:11:56.352199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2285340 ] 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:06.897 { 00:23:06.897 "params": { 00:23:06.897 "name": "Nvme$subsystem", 00:23:06.897 "trtype": "$TEST_TRANSPORT", 00:23:06.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:06.897 "adrfam": "ipv4", 00:23:06.897 "trsvcid": "$NVMF_PORT", 00:23:06.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:06.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:06.897 "hdgst": ${hdgst:-false}, 00:23:06.897 "ddgst": ${ddgst:-false} 00:23:06.897 }, 00:23:06.897 "method": "bdev_nvme_attach_controller" 00:23:06.897 } 00:23:06.897 EOF 00:23:06.897 )") 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:06.897 { 00:23:06.897 "params": { 00:23:06.897 "name": "Nvme$subsystem", 00:23:06.897 "trtype": "$TEST_TRANSPORT", 00:23:06.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:06.897 "adrfam": "ipv4", 00:23:06.897 "trsvcid": "$NVMF_PORT", 00:23:06.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:06.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:06.897 "hdgst": ${hdgst:-false}, 00:23:06.897 "ddgst": ${ddgst:-false} 00:23:06.897 }, 00:23:06.897 "method": "bdev_nvme_attach_controller" 00:23:06.897 } 00:23:06.897 EOF 00:23:06.897 )") 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:06.897 { 00:23:06.897 "params": { 00:23:06.897 "name": "Nvme$subsystem", 00:23:06.897 "trtype": "$TEST_TRANSPORT", 00:23:06.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:06.897 "adrfam": "ipv4", 00:23:06.897 "trsvcid": "$NVMF_PORT", 00:23:06.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:06.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:06.897 "hdgst": ${hdgst:-false}, 00:23:06.897 "ddgst": ${ddgst:-false} 00:23:06.897 }, 00:23:06.897 "method": "bdev_nvme_attach_controller" 00:23:06.897 } 00:23:06.897 EOF 00:23:06.897 )") 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:06.897 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:06.898 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:06.898 { 00:23:06.898 "params": { 00:23:06.898 "name": "Nvme$subsystem", 00:23:06.898 "trtype": "$TEST_TRANSPORT", 00:23:06.898 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:06.898 "adrfam": "ipv4", 00:23:06.898 "trsvcid": "$NVMF_PORT", 00:23:06.898 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:06.898 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:06.898 "hdgst": ${hdgst:-false}, 00:23:06.898 "ddgst": ${ddgst:-false} 00:23:06.898 }, 00:23:06.898 "method": "bdev_nvme_attach_controller" 00:23:06.898 } 00:23:06.898 EOF 00:23:06.898 )") 00:23:06.898 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:06.898 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:06.898 EAL: No free 2048 kB hugepages reported on node 1 00:23:06.898 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:23:06.898 { 00:23:06.898 "params": { 00:23:06.898 "name": "Nvme$subsystem", 00:23:06.898 "trtype": "$TEST_TRANSPORT", 00:23:06.898 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:06.898 "adrfam": "ipv4", 00:23:06.898 "trsvcid": "$NVMF_PORT", 00:23:06.898 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:06.898 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:06.898 "hdgst": ${hdgst:-false}, 00:23:06.898 "ddgst": ${ddgst:-false} 00:23:06.898 }, 00:23:06.898 "method": "bdev_nvme_attach_controller" 00:23:06.898 } 00:23:06.898 EOF 00:23:06.898 )") 00:23:06.898 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:06.898 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:23:06.898 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:23:06.898 12:11:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:06.898 "params": { 00:23:06.898 "name": "Nvme1", 00:23:06.898 "trtype": "tcp", 00:23:06.898 "traddr": "10.0.0.2", 00:23:06.898 "adrfam": "ipv4", 00:23:06.898 "trsvcid": "4420", 00:23:06.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:06.898 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:06.898 "hdgst": false, 00:23:06.898 "ddgst": false 00:23:06.898 }, 00:23:06.898 "method": "bdev_nvme_attach_controller" 00:23:06.898 },{ 00:23:06.898 "params": { 00:23:06.898 "name": "Nvme2", 00:23:06.898 "trtype": "tcp", 00:23:06.898 "traddr": "10.0.0.2", 00:23:06.898 "adrfam": "ipv4", 00:23:06.898 "trsvcid": "4420", 00:23:06.898 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:06.898 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:06.898 "hdgst": false, 00:23:06.898 "ddgst": false 00:23:06.898 }, 00:23:06.898 "method": "bdev_nvme_attach_controller" 00:23:06.898 },{ 00:23:06.898 "params": { 00:23:06.898 "name": "Nvme3", 00:23:06.898 "trtype": "tcp", 00:23:06.898 "traddr": "10.0.0.2", 00:23:06.898 "adrfam": "ipv4", 00:23:06.898 "trsvcid": "4420", 
00:23:06.898 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:06.898 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:06.898 "hdgst": false, 00:23:06.898 "ddgst": false 00:23:06.898 }, 00:23:06.898 "method": "bdev_nvme_attach_controller" 00:23:06.898 },{ 00:23:06.898 "params": { 00:23:06.898 "name": "Nvme4", 00:23:06.898 "trtype": "tcp", 00:23:06.898 "traddr": "10.0.0.2", 00:23:06.898 "adrfam": "ipv4", 00:23:06.898 "trsvcid": "4420", 00:23:06.898 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:06.898 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:06.898 "hdgst": false, 00:23:06.898 "ddgst": false 00:23:06.898 }, 00:23:06.898 "method": "bdev_nvme_attach_controller" 00:23:06.898 },{ 00:23:06.898 "params": { 00:23:06.898 "name": "Nvme5", 00:23:06.898 "trtype": "tcp", 00:23:06.898 "traddr": "10.0.0.2", 00:23:06.898 "adrfam": "ipv4", 00:23:06.898 "trsvcid": "4420", 00:23:06.898 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:06.898 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:06.898 "hdgst": false, 00:23:06.898 "ddgst": false 00:23:06.898 }, 00:23:06.898 "method": "bdev_nvme_attach_controller" 00:23:06.898 },{ 00:23:06.898 "params": { 00:23:06.898 "name": "Nvme6", 00:23:06.898 "trtype": "tcp", 00:23:06.898 "traddr": "10.0.0.2", 00:23:06.898 "adrfam": "ipv4", 00:23:06.898 "trsvcid": "4420", 00:23:06.898 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:06.898 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:06.898 "hdgst": false, 00:23:06.898 "ddgst": false 00:23:06.898 }, 00:23:06.898 "method": "bdev_nvme_attach_controller" 00:23:06.898 },{ 00:23:06.898 "params": { 00:23:06.898 "name": "Nvme7", 00:23:06.898 "trtype": "tcp", 00:23:06.898 "traddr": "10.0.0.2", 00:23:06.898 "adrfam": "ipv4", 00:23:06.898 "trsvcid": "4420", 00:23:06.898 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:06.898 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:06.898 "hdgst": false, 00:23:06.898 "ddgst": false 00:23:06.898 }, 00:23:06.898 "method": "bdev_nvme_attach_controller" 00:23:06.898 },{ 00:23:06.898 "params": 
{ 00:23:06.898 "name": "Nvme8", 00:23:06.898 "trtype": "tcp", 00:23:06.898 "traddr": "10.0.0.2", 00:23:06.898 "adrfam": "ipv4", 00:23:06.898 "trsvcid": "4420", 00:23:06.898 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:06.898 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:06.898 "hdgst": false, 00:23:06.898 "ddgst": false 00:23:06.898 }, 00:23:06.898 "method": "bdev_nvme_attach_controller" 00:23:06.898 },{ 00:23:06.898 "params": { 00:23:06.898 "name": "Nvme9", 00:23:06.898 "trtype": "tcp", 00:23:06.898 "traddr": "10.0.0.2", 00:23:06.898 "adrfam": "ipv4", 00:23:06.898 "trsvcid": "4420", 00:23:06.898 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:06.898 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:06.898 "hdgst": false, 00:23:06.898 "ddgst": false 00:23:06.898 }, 00:23:06.898 "method": "bdev_nvme_attach_controller" 00:23:06.898 },{ 00:23:06.898 "params": { 00:23:06.898 "name": "Nvme10", 00:23:06.898 "trtype": "tcp", 00:23:06.898 "traddr": "10.0.0.2", 00:23:06.898 "adrfam": "ipv4", 00:23:06.898 "trsvcid": "4420", 00:23:06.898 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:06.898 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:06.898 "hdgst": false, 00:23:06.898 "ddgst": false 00:23:06.898 }, 00:23:06.898 "method": "bdev_nvme_attach_controller" 00:23:06.898 }' 00:23:07.156 [2024-06-10 12:11:56.424408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.156 [2024-06-10 12:11:56.493274] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.528 Running I/O for 10 seconds... 
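The block above is the xtrace of nvmf/common.sh's gen_nvmf_target_json: for each subsystem, a heredoc JSON fragment is appended to a `config` array, the fragments are joined with `IFS=,`, and the harness feeds the result through `jq`. A standalone sketch of that shape, reduced to two subsystems; variable names mirror the log, the `hdgst`/`ddgst` defaults are hardcoded, and the `jq` validation is replaced by a plain string check so the sketch has no external dependencies:

```shell
#!/usr/bin/env bash
# Two-subsystem sketch of the config-array build seen in the xtrace above.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2; do
    # one heredoc fragment per subsystem, as the loop in the log does
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# join the fragments with commas (the harness then pipes this through jq)
oldIFS=$IFS; IFS=,
joined="[${config[*]}]"
IFS=$oldIFS
```

The joined string, wrapped in brackets, is valid JSON and matches the `printf '%s\n' '{ ... },{ ... }'` output visible in the log.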
00:23:08.528 12:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:08.528 12:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:23:08.528 12:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:08.528 12:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:08.528 12:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:08.785 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:08.786 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:08.786 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:08.786 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:08.786 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:08.786 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:23:08.786 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:23:08.786 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:08.786 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:08.786 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:08.786 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:08.786 
12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:08.786 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:08.786 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:08.786 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:08.786 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:08.786 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:09.043 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:09.043 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:09.043 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:09.043 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:09.043 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:09.043 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:09.043 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:09.043 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:09.043 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:09.043 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:09.307 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:09.307 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i 
!= 0 )) 00:23:09.307 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:09.307 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:09.307 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:09.307 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:09.307 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:09.307 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=195 00:23:09.307 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:23:09.307 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:23:09.307 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:23:09.307 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:23:09.307 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2285024 00:23:09.307 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@949 -- # '[' -z 2285024 ']' 00:23:09.307 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # kill -0 2285024 00:23:09.307 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # uname 00:23:09.307 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:09.307 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2285024 00:23:09.307 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:23:09.307 
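The read counts 3, then 67, then 195 above come from target/shutdown.sh's waitforio loop: up to 10 passes, each pulling `num_read_ops` for Nvme1n1 via `bdev_get_iostat` piped through `jq`, sleeping 0.25 s between passes, and succeeding once at least 100 reads have completed. A standalone sketch of that loop, with the RPC call stubbed to replay the three values from this log:

```shell
#!/usr/bin/env bash
# Sketch of the waitforio polling loop from the xtrace above.
fake_iostat() {
    # Stand-in for: rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
    #               | jq -r '.bdevs[0].num_read_ops'
    # Replays the counts 3, 67, 195 observed in this log.
    local samples=(3 67 195)
    echo "${samples[$1]}"
}

waitforio() {
    local ret=1 i
    for (( i = 10; i != 0; i-- )); do
        read_io_count=$(fake_iostat $((10 - i)))
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

# succeeds on the third pass, once the count crosses 100
waitforio && echo "I/O observed: $read_io_count"
```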
12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:23:09.307 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2285024' 00:23:09.307 killing process with pid 2285024 00:23:09.307 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # kill 2285024 00:23:09.307 12:11:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # wait 2285024 00:23:09.307 [2024-06-10 12:11:58.792754] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619340 is same with the state(5) to be set 00:23:09.307 [2024-06-10 12:11:58.792812] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619340 is same with the state(5) to be set 00:23:09.307 [2024-06-10 12:11:58.792822] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619340 is same with the state(5) to be set 00:23:09.307 [2024-06-10 12:11:58.792832] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619340 is same with the state(5) to be set 00:23:09.307 [2024-06-10 12:11:58.792841] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619340 is same with the state(5) to be set 00:23:09.307 [2024-06-10 12:11:58.792850] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619340 is same with the state(5) to be set 00:23:09.307 [2024-06-10 12:11:58.792860] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619340 is same with the state(5) to be set 00:23:09.307 [2024-06-10 12:11:58.792869] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619340 is same with the state(5) to be set 00:23:09.307 [2024-06-10 12:11:58.792877] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619340 is same with the state(5) to be set 
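The "killing process with pid 2285024" sequence above is the killprocess helper at work: confirm the pid is alive with `kill -0`, fetch its command name with `ps --no-headers -o comm=` (the log shows `reactor_1`), special-case processes running as `sudo`, then kill and wait. A standalone sketch of the Linux path, run against a background sleep instead of the nvmf target; the sudo branch mirrors the log's check but is not exercised here:

```shell
#!/usr/bin/env bash
# Linux-path sketch of the killprocess sequence from the log above.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1           # still alive?
    process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_1 in the log
    if [ "$process_name" = sudo ]; then
        kill -INT -"$pid"        # signal the whole group when run under sudo
    else
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid" 2>/dev/null
}

sleep 30 &
bgpid=$!
killprocess "$bgpid" || true   # wait returns the killed job's exit status
```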
00:23:09.308 [2024-06-10 12:11:58.795021] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61bc10 is same with the state(5) to be set 00:23:09.308 [2024-06-10 12:11:58.797198] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set
is same with the state(5) to be set 00:23:09.308 [2024-06-10 12:11:58.797489] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set 00:23:09.308 [2024-06-10 12:11:58.797497] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set 00:23:09.308 [2024-06-10 12:11:58.797506] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set 00:23:09.308 [2024-06-10 12:11:58.797515] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set 00:23:09.308 [2024-06-10 12:11:58.797523] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set 00:23:09.308 [2024-06-10 12:11:58.797532] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.797541] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.797550] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.797558] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.797567] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.797576] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.797584] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set 
00:23:09.309 [2024-06-10 12:11:58.797593] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.797602] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.797611] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.797620] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.797628] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.797637] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.797646] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.797654] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.797663] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.797672] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.797681] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.797689] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.797698] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.797708] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.797717] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.797726] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.797734] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.797743] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.797751] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.797759] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.797768] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.797776] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619c80 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.798924] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.798949] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.798960] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.798969] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.798977] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.798986] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.798995] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799004] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799013] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799022] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799031] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799040] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799048] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799057] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799066] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 
is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799075] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799084] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799092] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799101] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799114] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799123] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799131] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799140] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799149] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799157] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799166] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799175] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 
00:23:09.309 [2024-06-10 12:11:58.799183] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799192] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799200] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799209] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799217] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799227] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799235] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799244] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799253] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799261] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799270] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799279] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799288] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799297] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799305] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799313] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799322] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799331] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799340] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799350] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799358] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799367] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799375] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799384] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799393] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799401] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799410] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799419] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.309 [2024-06-10 12:11:58.799427] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.799435] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.799444] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.799453] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.799461] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.799470] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.799483] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.799492] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a140 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800264] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 
is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800281] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800290] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800299] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800308] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800317] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800325] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800334] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800342] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800351] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800362] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800371] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800379] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 
00:23:09.310 [2024-06-10 12:11:58.800387] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800396] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800405] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800413] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800422] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800430] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800439] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800447] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800456] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800465] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800474] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800487] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800495] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800504] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800513] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800522] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800531] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800539] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800548] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800556] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800565] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800574] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800583] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800591] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800600] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800609] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800618] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800626] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800635] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800644] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800652] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800661] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800669] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800678] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800686] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800695] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800704] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 
is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800712] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800720] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800729] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800737] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800746] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800755] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800764] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800772] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800780] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800789] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800798] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 00:23:09.310 [2024-06-10 12:11:58.800806] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set 
00:23:09.310 [2024-06-10 12:11:58.800815] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61a5e0 is same with the state(5) to be set
00:23:09.310 [2024-06-10 12:11:58.801741] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61aaa0 is same with the state(5) to be set
[... previous message repeated 62 more times for tqpair=0x61aaa0, 12:11:58.801756 through 12:11:58.802299 ...]
00:23:09.311 [2024-06-10 12:11:58.803367] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61ae10 is same with the state(5) to be set
[... previous message repeated 62 more times for tqpair=0x61ae10, 12:11:58.803383 through 12:11:58.803918 ...]
00:23:09.312 [2024-06-10 12:11:58.804704] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61b2b0 is same with the state(5) to be set
[... previous message repeated 62 more times for tqpair=0x61b2b0, 12:11:58.804719 through 12:11:58.805244 ...]
00:23:09.313 [2024-06-10 12:11:58.805818] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61b750 is same with the state(5) to be set
[... previous message repeated 62 more times for tqpair=0x61b750, 12:11:58.805833 through 12:11:58.806370 ...]
00:23:09.313 [2024-06-10 12:11:58.807457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:09.314 [2024-06-10 12:11:58.807496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:09.314 [2024-06-10 12:11:58.807515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:09.314 [2024-06-10 12:11:58.807525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.807536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.807546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.807557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.807567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.807578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.807587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.807598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.807607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.807618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.807626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.807637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 
12:11:58.807646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.807656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.807665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.807676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.807685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.807696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.807708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.807719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.807728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.807738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.807750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.807760] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.807769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.807779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.807788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.807799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.807808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.807818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.807828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.807838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.807847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.807858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.807867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.807877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.807886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.807897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.807906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.807917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.807926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.807936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.807945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.807958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.807967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.807977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.807986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.807996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.808005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.808016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.808025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.808035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.808044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.808054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.808064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.808075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.808083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 
[2024-06-10 12:11:58.808094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.808103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.808113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.808122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.808133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.808142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.808153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.808162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.808172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.808181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.808191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.808202] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.808212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.808221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.808232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.808240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.808251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.808260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.314 [2024-06-10 12:11:58.808270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.314 [2024-06-10 12:11:58.808279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.808290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.315 [2024-06-10 12:11:58.808299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.808309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.315 [2024-06-10 12:11:58.808318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.808328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.315 [2024-06-10 12:11:58.808337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.808348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.315 [2024-06-10 12:11:58.808357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.808367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.315 [2024-06-10 12:11:58.808377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.808387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.315 [2024-06-10 12:11:58.808397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.808407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.315 [2024-06-10 12:11:58.808416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:09.315 [2024-06-10 12:11:58.808427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.315 [2024-06-10 12:11:58.808436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.808447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.315 [2024-06-10 12:11:58.808459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.808470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.315 [2024-06-10 12:11:58.808483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.808493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.315 [2024-06-10 12:11:58.808502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.808513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.315 [2024-06-10 12:11:58.808522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.808533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.315 [2024-06-10 12:11:58.808542] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.808552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.315 [2024-06-10 12:11:58.808561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.808571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.315 [2024-06-10 12:11:58.808580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.808591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.315 [2024-06-10 12:11:58.808600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.808610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.315 [2024-06-10 12:11:58.808619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.808630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.315 [2024-06-10 12:11:58.808639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.808649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.315 [2024-06-10 12:11:58.808658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.808668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.315 [2024-06-10 12:11:58.808677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.808687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.315 [2024-06-10 12:11:58.808699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.808709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.315 [2024-06-10 12:11:58.808718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.808729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.315 [2024-06-10 12:11:58.808738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.808748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.315 [2024-06-10 12:11:58.808757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.808788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:09.315 [2024-06-10 12:11:58.809180] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f0cce0 was disconnected and freed. reset controller. 00:23:09.315 [2024-06-10 12:11:58.809232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.315 [2024-06-10 12:11:58.809244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.809254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.315 [2024-06-10 12:11:58.809263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.809272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.315 [2024-06-10 12:11:58.809281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.809290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.315 [2024-06-10 12:11:58.809300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.809309] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fab3e0 is same with the state(5) to be set 00:23:09.315 [2024-06-10 12:11:58.809336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.315 [2024-06-10 12:11:58.809346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.809356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.315 [2024-06-10 12:11:58.809365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.809375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.315 [2024-06-10 12:11:58.809384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.809393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.315 [2024-06-10 12:11:58.809402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.809413] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e5610 is same with the state(5) to be set 00:23:09.315 [2024-06-10 12:11:58.809440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.315 [2024-06-10 12:11:58.809450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.809460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:09.315 [2024-06-10 12:11:58.809469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.809484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.315 [2024-06-10 12:11:58.809495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.809505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.315 [2024-06-10 12:11:58.809514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.809523] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e05b50 is same with the state(5) to be set 00:23:09.315 [2024-06-10 12:11:58.809550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.315 [2024-06-10 12:11:58.809560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.315 [2024-06-10 12:11:58.809570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.315 [2024-06-10 12:11:58.809578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.316 [2024-06-10 12:11:58.809589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.316 [2024-06-10 12:11:58.809598] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.316 [2024-06-10 12:11:58.809608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.316 [2024-06-10 12:11:58.809617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.316 [2024-06-10 12:11:58.809625] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e06bc0 is same with the state(5) to be set 00:23:09.316 [2024-06-10 12:11:58.809650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.316 [2024-06-10 12:11:58.809660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.316 [2024-06-10 12:11:58.809670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.316 [2024-06-10 12:11:58.809679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.316 [2024-06-10 12:11:58.809688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.316 [2024-06-10 12:11:58.809697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.316 [2024-06-10 12:11:58.809708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.316 [2024-06-10 12:11:58.809717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.316 
[2024-06-10 12:11:58.809726] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0c8a0 is same with the state(5) to be set 00:23:09.316 [2024-06-10 12:11:58.809750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.316 [2024-06-10 12:11:58.809761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.316 [2024-06-10 12:11:58.809770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.316 [2024-06-10 12:11:58.809779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.316 [2024-06-10 12:11:58.809789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.316 [2024-06-10 12:11:58.809798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.316 [2024-06-10 12:11:58.809807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.316 [2024-06-10 12:11:58.809816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.316 [2024-06-10 12:11:58.809825] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de2a50 is same with the state(5) to be set 00:23:09.316 [2024-06-10 12:11:58.809849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.316 [2024-06-10 12:11:58.809862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.316 [2024-06-10 12:11:58.809872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.316 [2024-06-10 12:11:58.809880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.316 [2024-06-10 12:11:58.809890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.316 [2024-06-10 12:11:58.809899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.316 [2024-06-10 12:11:58.809908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.316 [2024-06-10 12:11:58.809917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.316 [2024-06-10 12:11:58.809926] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82990 is same with the state(5) to be set 00:23:09.316 [2024-06-10 12:11:58.809949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.316 [2024-06-10 12:11:58.809959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.316 [2024-06-10 12:11:58.809969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.316 [2024-06-10 12:11:58.809979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.316 [2024-06-10 
12:11:58.809988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.316 [2024-06-10 12:11:58.809999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.316 [2024-06-10 12:11:58.810008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.316 [2024-06-10 12:11:58.810017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.316 [2024-06-10 12:11:58.810026] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e04490 is same with the state(5) to be set 00:23:09.316 [2024-06-10 12:11:58.810051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.316 [2024-06-10 12:11:58.810061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.316 [2024-06-10 12:11:58.810071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.316 [2024-06-10 12:11:58.810080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.316 [2024-06-10 12:11:58.810090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.316 [2024-06-10 12:11:58.810099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.316 [2024-06-10 12:11:58.810108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.316 [2024-06-10 12:11:58.810117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.316 [2024-06-10 12:11:58.810126] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaa010 is same with the state(5) to be set 00:23:09.316 [2024-06-10 12:11:58.810152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.316 [2024-06-10 12:11:58.810162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.316 [2024-06-10 12:11:58.810171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.316 [2024-06-10 12:11:58.810180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.316 [2024-06-10 12:11:58.810190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.316 [2024-06-10 12:11:58.810198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.316 [2024-06-10 12:11:58.810209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.316 [2024-06-10 12:11:58.810218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.316 [2024-06-10 12:11:58.810227] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa39b0 is same with the state(5) to be set 00:23:09.316 [2024-06-10 
12:11:58.810330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.316 [2024-06-10 12:11:58.810344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.316 [2024-06-10 12:11:58.810359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.316 [2024-06-10 12:11:58.810370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.316 [2024-06-10 12:11:58.810381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.316 [2024-06-10 12:11:58.810392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.316 [2024-06-10 12:11:58.810403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.316 [2024-06-10 12:11:58.810412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.316 [2024-06-10 12:11:58.810422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.316 [2024-06-10 12:11:58.810431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.316 [2024-06-10 12:11:58.810442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.317 [2024-06-10 12:11:58.810451] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.317 [2024-06-10 12:11:58.810462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.317 [2024-06-10 12:11:58.810471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.317 [2024-06-10 12:11:58.810487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.317 [2024-06-10 12:11:58.810496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.317 [2024-06-10 12:11:58.810507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.317 [2024-06-10 12:11:58.810516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.317 [2024-06-10 12:11:58.810527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.317 [2024-06-10 12:11:58.810536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.317 [2024-06-10 12:11:58.810547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.317 [2024-06-10 12:11:58.810556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.317 [2024-06-10 12:11:58.810567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.317 [2024-06-10 12:11:58.810576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.317 [2024-06-10 12:11:58.810586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.317 [2024-06-10 12:11:58.810595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.317 [2024-06-10 12:11:58.810605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.317 [2024-06-10 12:11:58.810616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.317 [2024-06-10 12:11:58.810628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.317 [2024-06-10 12:11:58.810637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.317 [2024-06-10 12:11:58.810647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.317 [2024-06-10 12:11:58.810656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.317 [2024-06-10 12:11:58.810666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.317 [2024-06-10 12:11:58.810676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:09.317 [2024-06-10 12:11:58.810686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.317 [2024-06-10 12:11:58.810695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.317 [2024-06-10 12:11:58.810706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.317 [2024-06-10 12:11:58.810716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.317 [2024-06-10 12:11:58.810726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.317 [2024-06-10 12:11:58.810735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.317 [2024-06-10 12:11:58.810746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.317 [2024-06-10 12:11:58.810755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.317 [2024-06-10 12:11:58.810765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.317 [2024-06-10 12:11:58.810775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.317 [2024-06-10 12:11:58.810785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.317 [2024-06-10 
12:11:58.810794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.317 [2024-06-10 12:11:58.810804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.317 [2024-06-10 12:11:58.810814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.317 [2024-06-10 12:11:58.810824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.317 [2024-06-10 12:11:58.810833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.317 [2024-06-10 12:11:58.810844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.317 [2024-06-10 12:11:58.810853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.317 [2024-06-10 12:11:58.810863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.317 [2024-06-10 12:11:58.810873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.317 [2024-06-10 12:11:58.810884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.317 [2024-06-10 12:11:58.810893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.317 [2024-06-10 12:11:58.810903] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.317 [2024-06-10 12:11:58.810912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.317 [2024-06-10 12:11:58.810923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.587 [2024-06-10 12:11:58.823200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.587 [2024-06-10 12:11:58.823226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.587 [2024-06-10 12:11:58.823238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.587 [2024-06-10 12:11:58.823253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.587 [2024-06-10 12:11:58.823265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.587 [2024-06-10 12:11:58.823279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.587 [2024-06-10 12:11:58.823291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.587 [2024-06-10 12:11:58.823305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.587 [2024-06-10 12:11:58.823318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.587 [2024-06-10 12:11:58.823332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.587 [2024-06-10 12:11:58.823346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.587 [2024-06-10 12:11:58.823360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.587 [2024-06-10 12:11:58.823372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.587 [2024-06-10 12:11:58.823387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.587 [2024-06-10 12:11:58.823399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.587 [2024-06-10 12:11:58.823413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.587 [2024-06-10 12:11:58.823425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.587 [2024-06-10 12:11:58.823439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.587 [2024-06-10 12:11:58.823451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.587 [2024-06-10 12:11:58.823469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.587 [2024-06-10 12:11:58.823486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.587 [2024-06-10 12:11:58.823500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.587 [2024-06-10 12:11:58.823513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.587 [2024-06-10 12:11:58.823527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.587 [2024-06-10 12:11:58.823539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.587 [2024-06-10 12:11:58.823553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.587 [2024-06-10 12:11:58.823565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.587 [2024-06-10 12:11:58.823580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.587 [2024-06-10 12:11:58.823592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.587 [2024-06-10 12:11:58.823606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.588 [2024-06-10 12:11:58.823618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.588 
[2024-06-10 12:11:58.823632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.588 [2024-06-10 12:11:58.823645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.588 [2024-06-10 12:11:58.823659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.588 [2024-06-10 12:11:58.823671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.588 [2024-06-10 12:11:58.823685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.588 [2024-06-10 12:11:58.823698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.588 [2024-06-10 12:11:58.823712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.588 [2024-06-10 12:11:58.823724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.588 [2024-06-10 12:11:58.823738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.588 [2024-06-10 12:11:58.823750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.588 [2024-06-10 12:11:58.823764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.588 [2024-06-10 12:11:58.823777] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.588 [2024-06-10 12:11:58.823791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.588 [2024-06-10 12:11:58.823805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.588 [2024-06-10 12:11:58.823819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.588 [2024-06-10 12:11:58.823831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.588 [2024-06-10 12:11:58.823845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.588 [2024-06-10 12:11:58.823857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.588 [2024-06-10 12:11:58.823871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.588 [2024-06-10 12:11:58.823883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.588 [2024-06-10 12:11:58.823897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.588 [2024-06-10 12:11:58.823909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.588 [2024-06-10 12:11:58.823923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.588 [2024-06-10 12:11:58.823935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.588 [2024-06-10 12:11:58.823950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.588 [2024-06-10 12:11:58.823962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.588 [2024-06-10 12:11:58.823977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.588 [2024-06-10 12:11:58.823989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.588 [2024-06-10 12:11:58.824003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.588 [2024-06-10 12:11:58.824015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.588 [2024-06-10 12:11:58.824029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.588 [2024-06-10 12:11:58.824041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.588 [2024-06-10 12:11:58.824055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.588 [2024-06-10 12:11:58.824068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:09.588 [2024-06-10 12:11:58.824082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.588 [2024-06-10 12:11:58.824094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.588 [2024-06-10 12:11:58.824108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.588 [2024-06-10 12:11:58.824120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.588 [2024-06-10 12:11:58.824135] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91f60 is same with the state(5) to be set 00:23:09.588 [2024-06-10 12:11:58.824206] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f91f60 was disconnected and freed. reset controller. 
00:23:09.588 [2024-06-10 12:11:58.825846] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fab3e0 (9): Bad file descriptor
00:23:09.588 [2024-06-10 12:11:58.825888] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e5610 (9): Bad file descriptor
00:23:09.588 [2024-06-10 12:11:58.825914] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e05b50 (9): Bad file descriptor
00:23:09.588 [2024-06-10 12:11:58.825938] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e06bc0 (9): Bad file descriptor
00:23:09.588 [2024-06-10 12:11:58.825957] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0c8a0 (9): Bad file descriptor
00:23:09.588 [2024-06-10 12:11:58.825979] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de2a50 (9): Bad file descriptor
00:23:09.588 [2024-06-10 12:11:58.826000] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e82990 (9): Bad file descriptor
00:23:09.588 [2024-06-10 12:11:58.826019] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e04490 (9): Bad file descriptor
00:23:09.588 [2024-06-10 12:11:58.826042] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eaa010 (9): Bad file descriptor
00:23:09.588 [2024-06-10 12:11:58.826062] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fa39b0 (9): Bad file descriptor
00:23:09.588 [2024-06-10 12:11:58.827659] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:23:09.588 [2024-06-10 12:11:58.827692] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:23:09.588 [2024-06-10 12:11:58.828577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.588 [2024-06-10 12:11:58.828608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e82990 with addr=10.0.0.2, port=4420
00:23:09.588 [2024-06-10 12:11:58.828623] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82990 is same with the state(5) to be set
00:23:09.588 [2024-06-10 12:11:58.828749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:09.588 [2024-06-10 12:11:58.828765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e06bc0 with addr=10.0.0.2, port=4420
00:23:09.588 [2024-06-10 12:11:58.828778] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e06bc0 is same with the state(5) to be set
00:23:09.588 [2024-06-10 12:11:58.829156] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:09.588 [2024-06-10 12:11:58.829214] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:09.588 [2024-06-10 12:11:58.829269] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:09.588 [2024-06-10 12:11:58.829320] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:09.588 [2024-06-10 12:11:58.829373] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:09.588 [2024-06-10 12:11:58.829425] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:09.588 [2024-06-10 12:11:58.829485] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:09.588 [2024-06-10 12:11:58.829518] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e82990 (9): Bad file descriptor
00:23:09.588 [2024-06-10 12:11:58.829537] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e06bc0 (9): Bad file descriptor
00:23:09.588 [2024-06-10 12:11:58.829587] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:09.588 [2024-06-10 12:11:58.829702] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:23:09.588 [2024-06-10 12:11:58.829722] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:23:09.588 [2024-06-10 12:11:58.829736] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:23:09.588 [2024-06-10 12:11:58.829757] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:23:09.588 [2024-06-10 12:11:58.829769] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:23:09.588 [2024-06-10 12:11:58.829780] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:23:09.588 [2024-06-10 12:11:58.829858] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:09.588 [2024-06-10 12:11:58.829872] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:09.588 [2024-06-10 12:11:58.836005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:09.588 [2024-06-10 12:11:58.836026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:1 through cid:62, lba 24704 through 32512 in steps of 128, timestamps 12:11:58.836044 through 12:11:58.837572 ...]
00:23:09.590 [2024-06-10 12:11:58.837585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:09.590 [2024-06-10 12:11:58.837597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:09.590 [2024-06-10 12:11:58.837609] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f145f0 is same with the state(5) to be set
00:23:09.590 [2024-06-10 12:11:58.838805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:09.590 [2024-06-10 12:11:58.838822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:1 through cid:24, lba 24704 through 27648 in steps of 128, timestamps 12:11:58.838837 through 12:11:58.839418 ...]
00:23:09.591 [2024-06-10 12:11:58.839431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:09.591 [2024-06-10 12:11:58.839442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:23:09.591 [2024-06-10 12:11:58.839455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.591 [2024-06-10 12:11:58.839467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.591 [2024-06-10 12:11:58.839485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.591 [2024-06-10 12:11:58.839496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.591 [2024-06-10 12:11:58.839509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.591 [2024-06-10 12:11:58.839521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.591 [2024-06-10 12:11:58.839534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.591 [2024-06-10 12:11:58.839545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.591 [2024-06-10 12:11:58.839558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.591 [2024-06-10 12:11:58.839569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.591 [2024-06-10 12:11:58.839583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.591 [2024-06-10 
12:11:58.839594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.591 [2024-06-10 12:11:58.839607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.591 [2024-06-10 12:11:58.839619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.591 [2024-06-10 12:11:58.839635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.591 [2024-06-10 12:11:58.839647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.591 [2024-06-10 12:11:58.839660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.591 [2024-06-10 12:11:58.839671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.591 [2024-06-10 12:11:58.839684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.591 [2024-06-10 12:11:58.839695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.591 [2024-06-10 12:11:58.839709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.591 [2024-06-10 12:11:58.839720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.591 [2024-06-10 12:11:58.839733] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.591 [2024-06-10 12:11:58.839744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.591 [2024-06-10 12:11:58.839757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.591 [2024-06-10 12:11:58.839768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.591 [2024-06-10 12:11:58.839781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.591 [2024-06-10 12:11:58.839792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.591 [2024-06-10 12:11:58.839805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.591 [2024-06-10 12:11:58.839816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.591 [2024-06-10 12:11:58.839829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.591 [2024-06-10 12:11:58.839841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.591 [2024-06-10 12:11:58.839854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.591 [2024-06-10 12:11:58.839865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.591 [2024-06-10 12:11:58.839878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.591 [2024-06-10 12:11:58.839889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.591 [2024-06-10 12:11:58.839902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.591 [2024-06-10 12:11:58.839914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.591 [2024-06-10 12:11:58.839927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.591 [2024-06-10 12:11:58.839939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.591 [2024-06-10 12:11:58.839952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.591 [2024-06-10 12:11:58.839964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.591 [2024-06-10 12:11:58.839977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.591 [2024-06-10 12:11:58.839988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.591 [2024-06-10 12:11:58.840001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.591 
[2024-06-10 12:11:58.840013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.591 [2024-06-10 12:11:58.840026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.591 [2024-06-10 12:11:58.840038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.591 [2024-06-10 12:11:58.840050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.591 [2024-06-10 12:11:58.840062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.591 [2024-06-10 12:11:58.840074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.591 [2024-06-10 12:11:58.840086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.591 [2024-06-10 12:11:58.840099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.591 [2024-06-10 12:11:58.840110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.591 [2024-06-10 12:11:58.840123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.591 [2024-06-10 12:11:58.840135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.591 [2024-06-10 12:11:58.840148] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.591 [2024-06-10 12:11:58.840159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.591 [2024-06-10 12:11:58.840172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.591 [2024-06-10 12:11:58.840183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.591 [2024-06-10 12:11:58.840196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.592 [2024-06-10 12:11:58.840207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.592 [2024-06-10 12:11:58.840220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.592 [2024-06-10 12:11:58.840231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.592 [2024-06-10 12:11:58.840246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.592 [2024-06-10 12:11:58.840257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.592 [2024-06-10 12:11:58.840270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.592 [2024-06-10 12:11:58.840282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.592 [2024-06-10 12:11:58.840295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.592 [2024-06-10 12:11:58.840306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.592 [2024-06-10 12:11:58.840319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.592 [2024-06-10 12:11:58.840330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.592 [2024-06-10 12:11:58.840343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.592 [2024-06-10 12:11:58.840355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.592 [2024-06-10 12:11:58.840368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.592 [2024-06-10 12:11:58.840379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.592 [2024-06-10 12:11:58.840391] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f93430 is same with the state(5) to be set 00:23:09.592 [2024-06-10 12:11:58.841586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.592 [2024-06-10 12:11:58.841604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:09.592 [2024-06-10 12:11:58.841620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.592 [2024-06-10 12:11:58.841631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.592 [2024-06-10 12:11:58.841645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.592 [2024-06-10 12:11:58.841657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.592 [2024-06-10 12:11:58.841670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.592 [2024-06-10 12:11:58.841681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.592 [2024-06-10 12:11:58.841695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.592 [2024-06-10 12:11:58.841717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.592 [2024-06-10 12:11:58.841728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.592 [2024-06-10 12:11:58.841737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.592 [2024-06-10 12:11:58.841749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.592 [2024-06-10 12:11:58.841759] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.592 [2024-06-10 12:11:58.841769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.592 [2024-06-10 12:11:58.841778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.592 [2024-06-10 12:11:58.841789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.592 [2024-06-10 12:11:58.841798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.592 [2024-06-10 12:11:58.841808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.592 [2024-06-10 12:11:58.841818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.592 [2024-06-10 12:11:58.841828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.592 [2024-06-10 12:11:58.841837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.592 [2024-06-10 12:11:58.841847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.592 [2024-06-10 12:11:58.841856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.592 [2024-06-10 12:11:58.841867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.592 [2024-06-10 12:11:58.841876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.592 [2024-06-10 12:11:58.841887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.592 [2024-06-10 12:11:58.841896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.592 [2024-06-10 12:11:58.841906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.592 [2024-06-10 12:11:58.841916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.592 [2024-06-10 12:11:58.841926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.592 [2024-06-10 12:11:58.841935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.592 [2024-06-10 12:11:58.841946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.592 [2024-06-10 12:11:58.841955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.592 [2024-06-10 12:11:58.841965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.592 [2024-06-10 12:11:58.841974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:09.592 [2024-06-10 12:11:58.841985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.592 [2024-06-10 12:11:58.841996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.592 [2024-06-10 12:11:58.842006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.592 [2024-06-10 12:11:58.842015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.592 [2024-06-10 12:11:58.842025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.592 [2024-06-10 12:11:58.842035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.592 [2024-06-10 12:11:58.842045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.592 [2024-06-10 12:11:58.842054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.592 [2024-06-10 12:11:58.842065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.592 [2024-06-10 12:11:58.842074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.592 [2024-06-10 12:11:58.842085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.592 [2024-06-10 12:11:58.842094] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.592 [2024-06-10 12:11:58.842105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 
12:11:58.842429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842546] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 
[2024-06-10 12:11:58.842774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.593 [2024-06-10 12:11:58.842882] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.593 [2024-06-10 12:11:58.842891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.842901] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f94990 is same with the state(5) to be set 00:23:09.594 [2024-06-10 12:11:58.843862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.843877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.843890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.843899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.843910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.843919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.843930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.843939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.843952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 
nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.843962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.843972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.843981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.843992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.844001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.844012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.844021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.844031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.844041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.844051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.844061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:09.594 [2024-06-10 12:11:58.844071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.844080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.844091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.844100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.844111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.844120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.844131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.844140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.844150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.844159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.844170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.844179] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.844189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.844200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.844211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.844220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.844230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.844239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.844249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.844259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.844269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.844278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.844289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.844298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.844308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.844318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.844328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.844337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.844347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.844357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.844367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.844377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.844387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.844397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.844407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.844416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.844427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.844436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.844448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.844457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.844467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.844481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.844491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.844500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.844511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 
12:11:58.844520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.844531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.844540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.844551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.844560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.844571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.844580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.844591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.844600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.844611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.844619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.844630] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.594 [2024-06-10 12:11:58.844639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.594 [2024-06-10 12:11:58.844650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.844659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.844669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.844678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.844689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.844702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.844713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.844722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.844733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.844742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.844752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.844761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.844772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.844781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.844791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.844800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.844811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.844820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.844830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.844840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.844850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 
[2024-06-10 12:11:58.844859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.844870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.844879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.844889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.844898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.844909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.844918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.844928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.844937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.844948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.844959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.844969] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.844978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.844988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.844997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.845008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.845017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.845027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.845037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.845047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.845056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.845066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.845075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.845086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.845095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.845105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.845114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.845125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.845134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.845144] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f95e60 is same with the state(5) to be set 00:23:09.595 [2024-06-10 12:11:58.846107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.846121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.846133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.846142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:09.595 [2024-06-10 12:11:58.846153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.846164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.846175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.846184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.846195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.846204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.846214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.846223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.846234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.846243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.846254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.846263] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.846273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.846283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.846293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.846302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.846312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.846321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.846332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.846341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.846351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.595 [2024-06-10 12:11:58.846361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.595 [2024-06-10 12:11:58.846372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:09.595 [2024-06-10 12:11:58.846381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 50 further READ command/completion pairs elided: sqid:1 cid:14-63 nsid:1 lba:26368-32640 (len:128, lba step 128), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:23:09.597 [2024-06-10 12:11:58.847397] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddaad0 is same with the state(5) to be set
[... 61 further command/completion pairs elided: WRITE sqid:1 cid:55-63 lba:39808-40832, READ sqid:1 cid:5-12 lba:33408-34304, WRITE sqid:1 cid:0-4 lba:40960-41472, READ sqid:1 cid:13-51 lba:34432-39296 (all len:128), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:23:09.598 [2024-06-10 12:11:58.849594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:09.598 [2024-06-10 12:11:58.849602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.598 [2024-06-10 12:11:58.849613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.598 [2024-06-10 12:11:58.849622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.598 [2024-06-10 12:11:58.849634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.598 [2024-06-10 12:11:58.849643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.598 [2024-06-10 12:11:58.849653] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddbff0 is same with the state(5) to be set 00:23:09.598 [2024-06-10 12:11:58.850609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.850622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.850634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.850644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.850655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.850664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:09.599 [2024-06-10 12:11:58.850674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.850684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.850694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.850703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.850714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.850723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.850734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.850742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.850753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.850763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.850773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.850784] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.850795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.850804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.850815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.850824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.850837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.850847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.850857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.850867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.850878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.850887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.850898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.850907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.850917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.850926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.850937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.850946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.850957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.850966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.850976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.850985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.850996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.851005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:09.599 [2024-06-10 12:11:58.851015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.851025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.851035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.851044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.851055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.851064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.851074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.851085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.851096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.851106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.851116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.851125] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.851137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.851147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.851157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.851167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.851177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.851186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.851197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.851206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.851217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.851226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.851236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.851246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.851257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.851266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.851276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.851285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.851296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.851305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.851316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.851325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.851336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.851349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.851360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.851369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.851380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.851389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.851400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.599 [2024-06-10 12:11:58.851409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.599 [2024-06-10 12:11:58.851419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.851429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.851440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.851449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.851460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 
12:11:58.851469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.851485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.851495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.851505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.851514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.851525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.851534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.851545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.851554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.851564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.851573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.851584] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.851593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.851606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.851615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.851626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.851635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.851646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.851655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.851665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.851674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.851685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.851694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.851705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.851714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.851725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.851734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.851745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.851754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.851765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.851774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.851785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.851794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.851804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 
[2024-06-10 12:11:58.851813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.851824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.851833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.851843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.851854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.851864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.851873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.851884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.851894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.851904] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddd510 is same with the state(5) to be set 00:23:09.600 [2024-06-10 12:11:58.852878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.852891] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.852904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.852914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.852925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.852934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.852945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.852954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.852965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.852974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.852985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.852994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.853005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.853014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.853025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.853034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.853046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.853055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.853066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.853079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.853090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.853099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.853110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.853120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:09.600 [2024-06-10 12:11:58.853131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.853140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.853150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.600 [2024-06-10 12:11:58.853159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.600 [2024-06-10 12:11:58.853170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853239] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 
12:11:58.853584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853695] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.601 [2024-06-10 12:11:58.853894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.601 [2024-06-10 12:11:58.853904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.602 [2024-06-10 12:11:58.853914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.602 
[2024-06-10 12:11:58.853923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.602 [2024-06-10 12:11:58.853934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.602 [2024-06-10 12:11:58.853943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.602 [2024-06-10 12:11:58.853953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.602 [2024-06-10 12:11:58.853962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.602 [2024-06-10 12:11:58.853973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.602 [2024-06-10 12:11:58.853982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.602 [2024-06-10 12:11:58.853993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.602 [2024-06-10 12:11:58.854001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.602 [2024-06-10 12:11:58.854013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.602 [2024-06-10 12:11:58.854022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.602 [2024-06-10 12:11:58.854033] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.602 [2024-06-10 12:11:58.854042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.602 [2024-06-10 12:11:58.854053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.602 [2024-06-10 12:11:58.854062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.602 [2024-06-10 12:11:58.854073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.602 [2024-06-10 12:11:58.854083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.602 [2024-06-10 12:11:58.854094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.602 [2024-06-10 12:11:58.854103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.602 [2024-06-10 12:11:58.854113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.602 [2024-06-10 12:11:58.854122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.602 [2024-06-10 12:11:58.854133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.602 [2024-06-10 12:11:58.854142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.602 [2024-06-10 12:11:58.854153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.602 [2024-06-10 12:11:58.854163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.602 [2024-06-10 12:11:58.854173] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dde8b0 is same with the state(5) to be set 00:23:09.602 [2024-06-10 12:11:58.855372] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.602 [2024-06-10 12:11:58.855394] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:09.602 [2024-06-10 12:11:58.855406] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:09.602 [2024-06-10 12:11:58.855417] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:09.602 [2024-06-10 12:11:58.855495] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:09.602 [2024-06-10 12:11:58.855513] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:09.602 [2024-06-10 12:11:58.855530] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:09.602 [2024-06-10 12:11:58.855543] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:09.602 [2024-06-10 12:11:58.855619] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:23:09.602 [2024-06-10 12:11:58.855631] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:23:09.602 [2024-06-10 12:11:58.855641] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:23:09.602 task offset: 28160 on job bdev=Nvme10n1 fails
00:23:09.602
00:23:09.602 Latency(us)
00:23:09.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:09.602 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:09.602 Job: Nvme1n1 ended in about 0.90 seconds with error
00:23:09.602 Verification LBA range: start 0x0 length 0x400
00:23:09.602 Nvme1n1 : 0.90 212.49 13.28 70.83 0.00 223766.94 22439.53 198810.01
00:23:09.602 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:09.602 Job: Nvme2n1 ended in about 0.89 seconds with error
00:23:09.602 Verification LBA range: start 0x0 length 0x400
00:23:09.602 Nvme2n1 : 0.89 215.23 13.45 71.74 0.00 217151.08 18245.22 224814.69
00:23:09.602 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:09.602 Job: Nvme3n1 ended in about 0.91 seconds with error
00:23:09.602 Verification LBA range: start 0x0 length 0x400
00:23:09.602 Nvme3n1 : 0.91 211.84 13.24 70.61 0.00 216991.33 14994.64 208037.48
00:23:09.602 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:09.602 Job: Nvme4n1 ended in about 0.91 seconds with error
00:23:09.602 Verification LBA range: start 0x0 length 0x400
00:23:09.602 Nvme4n1 : 0.91 211.27 13.20 70.42 0.00 213895.58 16252.93 205520.90
00:23:09.602 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:09.602 Job: Nvme5n1 ended in about 0.91 seconds with error
00:23:09.602 Verification LBA range: start 0x0 length 0x400
00:23:09.602 Nvme5n1 : 0.91 220.64 13.79 70.25 0.00 203574.11 15728.64 231525.58
00:23:09.602 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:09.602 Job: Nvme6n1 ended in about 0.91 seconds with error
00:23:09.602 Verification LBA range: start 0x0 length 0x400
00:23:09.602 Nvme6n1 : 0.91 210.24 13.14 70.08 0.00 207591.01 16357.79 201326.59
00:23:09.602 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:09.602 Job: Nvme7n1 ended in about 0.92 seconds with error
00:23:09.602 Verification LBA range: start 0x0 length 0x400
00:23:09.602 Nvme7n1 : 0.92 285.09 17.82 69.91 0.00 160973.92 11010.05 201326.59
00:23:09.602 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:09.602 Job: Nvme8n1 ended in about 0.92 seconds with error
00:23:09.602 Verification LBA range: start 0x0 length 0x400
00:23:09.602 Nvme8n1 : 0.92 209.21 13.08 69.74 0.00 201286.25 13631.49 223136.97
00:23:09.602 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:09.602 Job: Nvme9n1 ended in about 0.92 seconds with error
00:23:09.602 Verification LBA range: start 0x0 length 0x400
00:23:09.602 Nvme9n1 : 0.92 208.69 13.04 69.56 0.00 198114.10 17196.65 208876.34
00:23:09.602 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:09.602 Job: Nvme10n1 ended in about 0.89 seconds with error
00:23:09.602 Verification LBA range: start 0x0 length 0x400
00:23:09.602 Nvme10n1 : 0.89 215.58 13.47 71.86 0.00 187034.83 17511.22 213909.50
00:23:09.602 ===================================================================================================================
00:23:09.602 Total : 2200.28 137.52 705.01 0.00 201939.51 11010.05 231525.58
00:23:09.602 [2024-06-10 12:11:58.876261] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:09.602 [2024-06-10 12:11:58.876298] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting
controller 00:23:09.602 [2024-06-10 12:11:58.876637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.602 [2024-06-10 12:11:58.876657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de2a50 with addr=10.0.0.2, port=4420 00:23:09.602 [2024-06-10 12:11:58.876669] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de2a50 is same with the state(5) to be set 00:23:09.602 [2024-06-10 12:11:58.876789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.602 [2024-06-10 12:11:58.876801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0c8a0 with addr=10.0.0.2, port=4420 00:23:09.602 [2024-06-10 12:11:58.876810] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0c8a0 is same with the state(5) to be set 00:23:09.602 [2024-06-10 12:11:58.877024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.602 [2024-06-10 12:11:58.877036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e04490 with addr=10.0.0.2, port=4420 00:23:09.602 [2024-06-10 12:11:58.877045] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e04490 is same with the state(5) to be set 00:23:09.602 [2024-06-10 12:11:58.877205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.602 [2024-06-10 12:11:58.877217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e05b50 with addr=10.0.0.2, port=4420 00:23:09.602 [2024-06-10 12:11:58.877231] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e05b50 is same with the state(5) to be set 00:23:09.602 [2024-06-10 12:11:58.879191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.602 [2024-06-10 12:11:58.879211] 
nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab3e0 with addr=10.0.0.2, port=4420 00:23:09.602 [2024-06-10 12:11:58.879221] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fab3e0 is same with the state(5) to be set 00:23:09.602 [2024-06-10 12:11:58.879322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.602 [2024-06-10 12:11:58.879334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e5610 with addr=10.0.0.2, port=4420 00:23:09.602 [2024-06-10 12:11:58.879343] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e5610 is same with the state(5) to be set 00:23:09.602 [2024-06-10 12:11:58.879561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.603 [2024-06-10 12:11:58.879573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eaa010 with addr=10.0.0.2, port=4420 00:23:09.603 [2024-06-10 12:11:58.879582] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaa010 is same with the state(5) to be set 00:23:09.603 [2024-06-10 12:11:58.879667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.603 [2024-06-10 12:11:58.879678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fa39b0 with addr=10.0.0.2, port=4420 00:23:09.603 [2024-06-10 12:11:58.879688] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa39b0 is same with the state(5) to be set 00:23:09.603 [2024-06-10 12:11:58.879703] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de2a50 (9): Bad file descriptor 00:23:09.603 [2024-06-10 12:11:58.879717] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0c8a0 (9): Bad file descriptor 00:23:09.603 [2024-06-10 
12:11:58.879728] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e04490 (9): Bad file descriptor 00:23:09.603 [2024-06-10 12:11:58.879739] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e05b50 (9): Bad file descriptor 00:23:09.603 [2024-06-10 12:11:58.879769] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:09.603 [2024-06-10 12:11:58.879783] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:09.603 [2024-06-10 12:11:58.879798] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:09.603 [2024-06-10 12:11:58.879811] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:09.603 [2024-06-10 12:11:58.879823] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:09.603 [2024-06-10 12:11:58.879834] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:09.603 [2024-06-10 12:11:58.879902] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:09.603 [2024-06-10 12:11:58.879914] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:09.603 [2024-06-10 12:11:58.879950] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fab3e0 (9): Bad file descriptor 00:23:09.603 [2024-06-10 12:11:58.879963] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e5610 (9): Bad file descriptor 00:23:09.603 [2024-06-10 12:11:58.879974] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eaa010 (9): Bad file descriptor 00:23:09.603 [2024-06-10 12:11:58.879985] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fa39b0 (9): Bad file descriptor 00:23:09.603 [2024-06-10 12:11:58.879998] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.603 [2024-06-10 12:11:58.880007] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.603 [2024-06-10 12:11:58.880017] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.603 [2024-06-10 12:11:58.880029] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:09.603 [2024-06-10 12:11:58.880038] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:09.603 [2024-06-10 12:11:58.880046] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:23:09.603 [2024-06-10 12:11:58.880057] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:09.603 [2024-06-10 12:11:58.880066] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:09.603 [2024-06-10 12:11:58.880074] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:09.603 [2024-06-10 12:11:58.880085] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:09.603 [2024-06-10 12:11:58.880094] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:09.603 [2024-06-10 12:11:58.880102] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:09.603 [2024-06-10 12:11:58.880181] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:09.603 [2024-06-10 12:11:58.880190] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:09.603 [2024-06-10 12:11:58.880198] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:09.603 [2024-06-10 12:11:58.880205] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:09.603 [2024-06-10 12:11:58.880488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.603 [2024-06-10 12:11:58.880502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e06bc0 with addr=10.0.0.2, port=4420 00:23:09.603 [2024-06-10 12:11:58.880511] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e06bc0 is same with the state(5) to be set 00:23:09.603 [2024-06-10 12:11:58.880608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.603 [2024-06-10 12:11:58.880621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e82990 with addr=10.0.0.2, port=4420 00:23:09.603 [2024-06-10 12:11:58.880631] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82990 is same with the state(5) to be set 00:23:09.603 [2024-06-10 12:11:58.880639] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:09.603 [2024-06-10 12:11:58.880647] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:09.603 [2024-06-10 12:11:58.880656] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:23:09.603 [2024-06-10 12:11:58.880666] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:09.603 [2024-06-10 12:11:58.880675] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:09.603 [2024-06-10 12:11:58.880683] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:23:09.603 [2024-06-10 12:11:58.880693] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:09.603 [2024-06-10 12:11:58.880701] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:09.603 [2024-06-10 12:11:58.880713] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:23:09.603 [2024-06-10 12:11:58.880724] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:23:09.603 [2024-06-10 12:11:58.880732] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:23:09.603 [2024-06-10 12:11:58.880741] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:23:09.603 [2024-06-10 12:11:58.880766] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:09.603 [2024-06-10 12:11:58.880775] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:09.603 [2024-06-10 12:11:58.880782] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:09.603 [2024-06-10 12:11:58.880790] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:09.603 [2024-06-10 12:11:58.880799] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e06bc0 (9): Bad file descriptor 00:23:09.603 [2024-06-10 12:11:58.880811] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e82990 (9): Bad file descriptor 00:23:09.603 [2024-06-10 12:11:58.880838] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:09.603 [2024-06-10 12:11:58.880847] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:09.603 [2024-06-10 12:11:58.880856] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:09.603 [2024-06-10 12:11:58.880866] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:09.603 [2024-06-10 12:11:58.880874] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:09.603 [2024-06-10 12:11:58.880882] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:23:09.603 [2024-06-10 12:11:58.880906] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:09.603 [2024-06-10 12:11:58.880914] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:09.862 12:11:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:23:09.862 12:11:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:23:10.798 12:12:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2285340 00:23:10.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2285340) - No such process 00:23:10.798 12:12:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:23:10.798 12:12:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:23:10.798 12:12:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:10.798 12:12:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:10.798 12:12:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:10.798 12:12:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:10.798 12:12:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:10.798 12:12:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:23:10.798 12:12:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:10.798 12:12:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:23:10.798 12:12:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:10.798 12:12:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:10.798 rmmod nvme_tcp 00:23:10.798 rmmod nvme_fabrics 00:23:10.798 rmmod nvme_keyring 00:23:10.798 12:12:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:10.798 12:12:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:23:10.798 12:12:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:23:10.798 12:12:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:10.798 12:12:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:10.798 12:12:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:10.798 12:12:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:10.798 12:12:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:10.798 12:12:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:10.798 12:12:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.798 12:12:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:10.798 12:12:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.331 12:12:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:13.331 00:23:13.331 real 0m7.896s 00:23:13.331 user 0m19.042s 00:23:13.331 sys 0m1.609s 00:23:13.331 12:12:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:13.331 12:12:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:13.331 ************************************ 00:23:13.331 END TEST nvmf_shutdown_tc3 00:23:13.331 ************************************ 00:23:13.331 12:12:02 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - 
SIGINT SIGTERM EXIT 00:23:13.331 00:23:13.331 real 0m32.630s 00:23:13.331 user 1m19.195s 00:23:13.331 sys 0m9.998s 00:23:13.331 12:12:02 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:13.331 12:12:02 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:13.331 ************************************ 00:23:13.331 END TEST nvmf_shutdown 00:23:13.331 ************************************ 00:23:13.331 12:12:02 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:23:13.331 12:12:02 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:13.331 12:12:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:13.331 12:12:02 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:23:13.331 12:12:02 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:13.331 12:12:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:13.331 12:12:02 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:23:13.331 12:12:02 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:13.331 12:12:02 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:13.331 12:12:02 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:13.331 12:12:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:13.331 ************************************ 00:23:13.331 START TEST nvmf_multicontroller 00:23:13.331 ************************************ 00:23:13.331 12:12:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:13.331 * Looking for test storage... 
00:23:13.331 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:13.331 12:12:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:13.331 12:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:13.331 12:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:13.331 12:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:13.331 12:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:13.331 12:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:13.331 12:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:13.331 12:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:13.331 12:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:13.331 12:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:13.331 12:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:13.331 12:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:13.331 12:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:13.331 12:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:13.331 12:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:13.331 12:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:13.331 12:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:13.331 
12:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:13.331 12:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:13.331 12:12:02 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:13.331 12:12:02 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:13.331 12:12:02 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:13.332 12:12:02 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.332 12:12:02 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.332 12:12:02 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.332 12:12:02 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:13.332 12:12:02 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.332 12:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:13.332 12:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:13.332 12:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:13.332 12:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:13.332 12:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:13.332 12:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:13.332 12:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:13.332 12:12:02 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:13.332 12:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:13.332 12:12:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:13.332 12:12:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:13.332 12:12:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:13.332 12:12:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:13.332 12:12:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:13.332 12:12:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:13.332 12:12:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:13.332 12:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:13.332 12:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:13.332 12:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:13.332 12:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:13.332 12:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:13.332 12:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.332 12:12:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:13.332 12:12:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.332 12:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:13.332 12:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:13.332 12:12:02 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:23:13.332 12:12:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:19.894 12:12:08 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:19.894 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:19.894 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:19.894 12:12:08 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:19.894 Found net devices under 0000:af:00.0: cvl_0_0 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:19.894 Found net devices under 0000:af:00.1: cvl_0_1 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # 
nvmf_tcp_init 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:19.894 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:19.895 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:19.895 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:19.895 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:19.895 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:19.895 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:19.895 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:19.895 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:19.895 12:12:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:19.895 12:12:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:19.895 12:12:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:19.895 12:12:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:19.895 12:12:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:19.895 12:12:09 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:19.895 12:12:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:19.895 12:12:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:19.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:19.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:23:19.895 00:23:19.895 --- 10.0.0.2 ping statistics --- 00:23:19.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.895 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:23:19.895 12:12:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:19.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:19.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:23:19.895 00:23:19.895 --- 10.0.0.1 ping statistics --- 00:23:19.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.895 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:23:19.895 12:12:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:19.895 12:12:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:19.895 12:12:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:19.895 12:12:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:19.895 12:12:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:19.895 12:12:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:19.895 12:12:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:19.895 12:12:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:19.895 12:12:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 
-- # modprobe nvme-tcp 00:23:19.895 12:12:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:19.895 12:12:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:19.895 12:12:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:19.895 12:12:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:19.895 12:12:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2290193 00:23:19.895 12:12:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:19.895 12:12:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2290193 00:23:19.895 12:12:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # '[' -z 2290193 ']' 00:23:19.895 12:12:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.895 12:12:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:19.895 12:12:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.895 12:12:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:19.895 12:12:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:19.895 [2024-06-10 12:12:09.351378] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:23:19.895 [2024-06-10 12:12:09.351424] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.895 EAL: No free 2048 kB hugepages reported on node 1 00:23:20.153 [2024-06-10 12:12:09.426369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:20.153 [2024-06-10 12:12:09.498719] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:20.153 [2024-06-10 12:12:09.498758] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:20.153 [2024-06-10 12:12:09.498767] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:20.153 [2024-06-10 12:12:09.498775] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:20.153 [2024-06-10 12:12:09.498798] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:20.153 [2024-06-10 12:12:09.498900] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.153 [2024-06-10 12:12:09.498990] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:20.153 [2024-06-10 12:12:09.498992] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.718 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:20.718 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@863 -- # return 0 00:23:20.718 12:12:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:20.718 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:20.718 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.976 [2024-06-10 12:12:10.289858] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.976 Malloc0 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.976 [2024-06-10 12:12:10.354549] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.976 [2024-06-10 12:12:10.362555] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.976 Malloc1 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2290471 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2290471 /var/tmp/bdevperf.sock 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # '[' -z 2290471 ']' 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:20.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:20.976 12:12:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.911 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:21.911 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@863 -- # return 0 00:23:21.911 12:12:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:21.911 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:21.911 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.911 NVMe0n1 00:23:21.911 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:21.911 12:12:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:21.911 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:21.911 12:12:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:21.911 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.911 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:21.911 1 00:23:21.911 12:12:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:21.911 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:23:21.911 12:12:11 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:21.911 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:21.911 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:21.911 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:21.911 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:21.911 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:21.911 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:21.911 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.911 request: 00:23:21.911 { 00:23:21.911 "name": "NVMe0", 00:23:21.911 "trtype": "tcp", 00:23:21.911 "traddr": "10.0.0.2", 00:23:21.911 "adrfam": "ipv4", 00:23:21.911 "trsvcid": "4420", 00:23:21.911 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:21.911 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:21.911 "hostaddr": "10.0.0.2", 00:23:21.911 "hostsvcid": "60000", 00:23:21.911 "prchk_reftag": false, 00:23:21.911 "prchk_guard": false, 00:23:21.911 "hdgst": false, 00:23:21.911 "ddgst": false, 00:23:21.911 "method": "bdev_nvme_attach_controller", 00:23:21.911 "req_id": 1 00:23:21.911 } 00:23:21.911 Got JSON-RPC error response 00:23:21.911 response: 00:23:21.911 { 00:23:21.911 "code": -114, 00:23:21.911 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:21.911 } 00:23:21.911 
12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:21.911 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:23:21.911 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:21.911 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:21.911 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:21.911 12:12:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:21.911 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:23:21.911 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:21.911 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:21.911 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:21.911 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:21.911 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:21.911 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:21.911 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:21.912 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # 
set +x 00:23:21.912 request: 00:23:21.912 { 00:23:21.912 "name": "NVMe0", 00:23:21.912 "trtype": "tcp", 00:23:21.912 "traddr": "10.0.0.2", 00:23:21.912 "adrfam": "ipv4", 00:23:21.912 "trsvcid": "4420", 00:23:21.912 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:21.912 "hostaddr": "10.0.0.2", 00:23:21.912 "hostsvcid": "60000", 00:23:21.912 "prchk_reftag": false, 00:23:21.912 "prchk_guard": false, 00:23:21.912 "hdgst": false, 00:23:21.912 "ddgst": false, 00:23:21.912 "method": "bdev_nvme_attach_controller", 00:23:21.912 "req_id": 1 00:23:21.912 } 00:23:21.912 Got JSON-RPC error response 00:23:21.912 response: 00:23:21.912 { 00:23:21.912 "code": -114, 00:23:21.912 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:21.912 } 00:23:21.912 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:21.912 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:23:21.912 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:21.912 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:21.912 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:21.912 12:12:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:21.912 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:23:21.912 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:21.912 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 
-- # local arg=rpc_cmd 00:23:21.912 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:21.912 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:21.912 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:21.912 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:21.912 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:21.912 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:22.170 request: 00:23:22.170 { 00:23:22.170 "name": "NVMe0", 00:23:22.170 "trtype": "tcp", 00:23:22.170 "traddr": "10.0.0.2", 00:23:22.170 "adrfam": "ipv4", 00:23:22.170 "trsvcid": "4420", 00:23:22.170 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:22.170 "hostaddr": "10.0.0.2", 00:23:22.170 "hostsvcid": "60000", 00:23:22.170 "prchk_reftag": false, 00:23:22.170 "prchk_guard": false, 00:23:22.170 "hdgst": false, 00:23:22.170 "ddgst": false, 00:23:22.170 "multipath": "disable", 00:23:22.170 "method": "bdev_nvme_attach_controller", 00:23:22.170 "req_id": 1 00:23:22.170 } 00:23:22.170 Got JSON-RPC error response 00:23:22.170 response: 00:23:22.170 { 00:23:22.170 "code": -114, 00:23:22.170 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:22.170 } 00:23:22.170 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:22.170 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:23:22.170 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:22.170 12:12:11 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:22.170 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:22.170 12:12:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:22.170 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:23:22.170 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:22.170 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:22.170 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:22.170 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:22.171 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:22.171 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:22.171 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:22.171 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:22.171 request: 00:23:22.171 { 00:23:22.171 "name": "NVMe0", 00:23:22.171 "trtype": "tcp", 00:23:22.171 "traddr": "10.0.0.2", 00:23:22.171 "adrfam": "ipv4", 00:23:22.171 "trsvcid": "4420", 00:23:22.171 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:22.171 "hostaddr": "10.0.0.2", 00:23:22.171 
"hostsvcid": "60000", 00:23:22.171 "prchk_reftag": false, 00:23:22.171 "prchk_guard": false, 00:23:22.171 "hdgst": false, 00:23:22.171 "ddgst": false, 00:23:22.171 "multipath": "failover", 00:23:22.171 "method": "bdev_nvme_attach_controller", 00:23:22.171 "req_id": 1 00:23:22.171 } 00:23:22.171 Got JSON-RPC error response 00:23:22.171 response: 00:23:22.171 { 00:23:22.171 "code": -114, 00:23:22.171 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:22.171 } 00:23:22.171 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:22.171 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:23:22.171 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:22.171 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:22.171 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:22.171 12:12:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:22.171 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:22.171 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:22.171 00:23:22.171 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:22.171 12:12:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:22.171 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:22.171 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:22.171 12:12:11 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:22.171 12:12:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:22.171 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:22.171 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:22.429 00:23:22.429 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:22.429 12:12:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:22.429 12:12:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:22.429 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:22.429 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:22.429 12:12:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:22.429 12:12:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:22.429 12:12:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:23.804 0 00:23:23.804 12:12:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:23.804 12:12:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:23.804 12:12:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.804 12:12:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:23.804 
12:12:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2290471 00:23:23.804 12:12:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@949 -- # '[' -z 2290471 ']' 00:23:23.804 12:12:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # kill -0 2290471 00:23:23.804 12:12:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # uname 00:23:23.804 12:12:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:23.804 12:12:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2290471 00:23:23.804 12:12:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:23.804 12:12:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:23.804 12:12:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2290471' 00:23:23.804 killing process with pid 2290471 00:23:23.804 12:12:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@968 -- # kill 2290471 00:23:23.804 12:12:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@973 -- # wait 2290471 00:23:23.804 12:12:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:23.804 12:12:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:23.804 12:12:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.804 12:12:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:23.804 12:12:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:23.804 12:12:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:23.804 12:12:13 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.804 12:12:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:23.804 12:12:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:23.804 12:12:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:23.804 12:12:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # read -r file 00:23:23.804 12:12:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:23.804 12:12:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # sort -u 00:23:23.804 12:12:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # cat 00:23:23.804 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:23.804 [2024-06-10 12:12:10.469487] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:23:23.804 [2024-06-10 12:12:10.469540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2290471 ] 00:23:23.804 EAL: No free 2048 kB hugepages reported on node 1 00:23:23.804 [2024-06-10 12:12:10.539865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.804 [2024-06-10 12:12:10.609638] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.804 [2024-06-10 12:12:11.849503] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name 75f8066f-2c56-4ac6-8477-8e4d3c7da173 already exists 00:23:23.804 [2024-06-10 12:12:11.849536] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:75f8066f-2c56-4ac6-8477-8e4d3c7da173 alias for bdev NVMe1n1 00:23:23.804 [2024-06-10 12:12:11.849548] bdev_nvme.c:4308:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:23.804 Running I/O for 1 seconds... 00:23:23.804 00:23:23.804 Latency(us) 00:23:23.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.804 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:23.804 NVMe0n1 : 1.00 24617.30 96.16 0.00 0.00 5183.66 4666.16 12111.05 00:23:23.804 =================================================================================================================== 00:23:23.804 Total : 24617.30 96.16 0.00 0.00 5183.66 4666.16 12111.05 00:23:23.804 Received shutdown signal, test time was about 1.000000 seconds 00:23:23.804 00:23:23.804 Latency(us) 00:23:23.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.804 =================================================================================================================== 00:23:23.804 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:23.804 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:23.804 12:12:13 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1617 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:23.804 12:12:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # read -r file 00:23:23.804 12:12:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:23.804 12:12:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:23.804 12:12:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:23.804 12:12:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:23.804 12:12:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:23.804 12:12:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:23.804 12:12:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:23.804 rmmod nvme_tcp 00:23:23.804 rmmod nvme_fabrics 00:23:24.063 rmmod nvme_keyring 00:23:24.063 12:12:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:24.063 12:12:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:24.063 12:12:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:24.063 12:12:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2290193 ']' 00:23:24.063 12:12:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2290193 00:23:24.063 12:12:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@949 -- # '[' -z 2290193 ']' 00:23:24.063 12:12:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # kill -0 2290193 00:23:24.064 12:12:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # uname 00:23:24.064 12:12:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:24.064 12:12:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # ps 
--no-headers -o comm= 2290193 00:23:24.064 12:12:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:23:24.064 12:12:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:23:24.064 12:12:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2290193' 00:23:24.064 killing process with pid 2290193 00:23:24.064 12:12:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@968 -- # kill 2290193 00:23:24.064 12:12:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@973 -- # wait 2290193 00:23:24.323 12:12:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:24.323 12:12:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:24.323 12:12:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:24.323 12:12:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:24.323 12:12:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:24.323 12:12:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.323 12:12:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:24.323 12:12:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.228 12:12:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:26.228 00:23:26.228 real 0m13.133s 00:23:26.228 user 0m17.132s 00:23:26.228 sys 0m6.040s 00:23:26.228 12:12:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:26.228 12:12:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.228 ************************************ 00:23:26.228 END TEST nvmf_multicontroller 
00:23:26.228 ************************************ 00:23:26.228 12:12:15 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:26.228 12:12:15 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:26.228 12:12:15 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:26.228 12:12:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:26.486 ************************************ 00:23:26.486 START TEST nvmf_aer 00:23:26.486 ************************************ 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:26.486 * Looking for test storage... 00:23:26.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:26.486 12:12:15 
nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:23:26.486 12:12:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.050 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:33.050 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:23:33.050 12:12:22 
nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:33.050 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:33.050 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:33.050 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:33.050 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:33.050 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:23:33.050 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:33.050 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:23:33.050 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:23:33.050 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:23:33.050 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:23:33.050 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:23:33.050 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:23:33.050 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:33.050 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:33.051 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:33.051 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:33.051 Found net devices under 0000:af:00.0: cvl_0_0 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:33.051 12:12:22 
nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:33.051 Found net devices under 0000:af:00.1: cvl_0_1 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:33.051 12:12:22 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:33.309 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:33.309 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:33.309 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:33.309 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:33.309 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:33.309 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:33.309 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:33.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:33.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:23:33.309 00:23:33.309 --- 10.0.0.2 ping statistics --- 00:23:33.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.309 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:23:33.309 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:33.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:33.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:23:33.309 00:23:33.309 --- 10.0.0.1 ping statistics --- 00:23:33.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.309 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:23:33.309 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:33.309 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:23:33.309 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:33.309 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:33.309 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:33.309 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:33.309 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:33.309 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:33.309 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:33.567 12:12:22 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:33.567 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:33.567 12:12:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:33.567 12:12:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.567 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2294649 00:23:33.567 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2294649 00:23:33.567 12:12:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:33.567 12:12:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@830 -- # '[' -z 2294649 ']' 00:23:33.567 12:12:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:23:33.567 12:12:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:33.567 12:12:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.567 12:12:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:33.567 12:12:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.567 [2024-06-10 12:12:22.899363] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:23:33.568 [2024-06-10 12:12:22.899409] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.568 EAL: No free 2048 kB hugepages reported on node 1 00:23:33.568 [2024-06-10 12:12:22.972638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:33.568 [2024-06-10 12:12:23.042648] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.568 [2024-06-10 12:12:23.042688] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.568 [2024-06-10 12:12:23.042697] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.568 [2024-06-10 12:12:23.042706] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.568 [2024-06-10 12:12:23.042729] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:33.568 [2024-06-10 12:12:23.042822] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.568 [2024-06-10 12:12:23.042905] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.568 [2024-06-10 12:12:23.042989] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:33.568 [2024-06-10 12:12:23.042991] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@863 -- # return 0 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:34.503 [2024-06-10 12:12:23.755275] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:34.503 Malloc0 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:34.503 [2024-06-10 12:12:23.810068] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:34.503 [ 00:23:34.503 { 00:23:34.503 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:34.503 "subtype": "Discovery", 00:23:34.503 "listen_addresses": [], 00:23:34.503 "allow_any_host": true, 00:23:34.503 "hosts": [] 00:23:34.503 }, 00:23:34.503 { 00:23:34.503 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.503 "subtype": "NVMe", 00:23:34.503 "listen_addresses": [ 00:23:34.503 { 00:23:34.503 "trtype": "TCP", 00:23:34.503 "adrfam": "IPv4", 
00:23:34.503 "traddr": "10.0.0.2", 00:23:34.503 "trsvcid": "4420" 00:23:34.503 } 00:23:34.503 ], 00:23:34.503 "allow_any_host": true, 00:23:34.503 "hosts": [], 00:23:34.503 "serial_number": "SPDK00000000000001", 00:23:34.503 "model_number": "SPDK bdev Controller", 00:23:34.503 "max_namespaces": 2, 00:23:34.503 "min_cntlid": 1, 00:23:34.503 "max_cntlid": 65519, 00:23:34.503 "namespaces": [ 00:23:34.503 { 00:23:34.503 "nsid": 1, 00:23:34.503 "bdev_name": "Malloc0", 00:23:34.503 "name": "Malloc0", 00:23:34.503 "nguid": "FB592367224645F78839DB2DB21D26C5", 00:23:34.503 "uuid": "fb592367-2246-45f7-8839-db2db21d26c5" 00:23:34.503 } 00:23:34.503 ] 00:23:34.503 } 00:23:34.503 ] 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=2294746 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # local i=0 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 0 -lt 200 ']' 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=1 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:23:34.503 EAL: No free 2048 kB hugepages reported on node 1 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 1 -lt 200 ']' 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=2 00:23:34.503 12:12:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1275 -- # return 0 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:34.764 Malloc1 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:34.764 Asynchronous Event Request test 00:23:34.764 Attaching to 10.0.0.2 00:23:34.764 Attached to 10.0.0.2 00:23:34.764 Registering asynchronous event callbacks... 00:23:34.764 Starting namespace attribute notice tests for all controllers... 00:23:34.764 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:34.764 aer_cb - Changed Namespace 00:23:34.764 Cleaning up... 00:23:34.764 [ 00:23:34.764 { 00:23:34.764 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:34.764 "subtype": "Discovery", 00:23:34.764 "listen_addresses": [], 00:23:34.764 "allow_any_host": true, 00:23:34.764 "hosts": [] 00:23:34.764 }, 00:23:34.764 { 00:23:34.764 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.764 "subtype": "NVMe", 00:23:34.764 "listen_addresses": [ 00:23:34.764 { 00:23:34.764 "trtype": "TCP", 00:23:34.764 "adrfam": "IPv4", 00:23:34.764 "traddr": "10.0.0.2", 00:23:34.764 "trsvcid": "4420" 00:23:34.764 } 00:23:34.764 ], 00:23:34.764 "allow_any_host": true, 00:23:34.764 "hosts": [], 00:23:34.764 "serial_number": "SPDK00000000000001", 00:23:34.764 "model_number": "SPDK bdev Controller", 00:23:34.764 "max_namespaces": 2, 00:23:34.764 "min_cntlid": 1, 00:23:34.764 "max_cntlid": 65519, 00:23:34.764 "namespaces": [ 00:23:34.764 { 00:23:34.764 "nsid": 1, 00:23:34.764 "bdev_name": "Malloc0", 00:23:34.764 "name": "Malloc0", 00:23:34.764 "nguid": "FB592367224645F78839DB2DB21D26C5", 00:23:34.764 "uuid": "fb592367-2246-45f7-8839-db2db21d26c5" 00:23:34.764 }, 00:23:34.764 { 00:23:34.764 "nsid": 2, 00:23:34.764 "bdev_name": "Malloc1", 00:23:34.764 "name": "Malloc1", 00:23:34.764 "nguid": "923E6FE7BE4F4F5B91BEBD59256EEA8F", 00:23:34.764 "uuid": "923e6fe7-be4f-4f5b-91be-bd59256eea8f" 00:23:34.764 } 00:23:34.764 ] 00:23:34.764 } 00:23:34.764 ] 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 2294746 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:34.764 rmmod nvme_tcp 00:23:34.764 rmmod nvme_fabrics 00:23:34.764 rmmod nvme_keyring 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2294649 ']' 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2294649 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@949 -- # '[' -z 2294649 ']' 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # kill -0 2294649 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # uname 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:34.764 12:12:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2294649 00:23:35.023 12:12:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:35.023 12:12:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:35.023 12:12:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2294649' 00:23:35.023 killing process with pid 2294649 00:23:35.023 12:12:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@968 -- # kill 2294649 00:23:35.023 12:12:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@973 -- # wait 2294649 00:23:35.023 12:12:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:35.023 12:12:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:35.023 12:12:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:35.023 12:12:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:35.023 12:12:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:35.023 12:12:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.023 12:12:24 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:35.023 12:12:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.579 12:12:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:37.579 00:23:37.579 real 0m10.789s 00:23:37.579 user 0m7.654s 00:23:37.579 sys 0m5.803s 00:23:37.579 12:12:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:37.579 12:12:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.579 ************************************ 00:23:37.579 END TEST nvmf_aer 00:23:37.579 ************************************ 00:23:37.579 12:12:26 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:37.579 12:12:26 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:37.579 12:12:26 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:37.579 12:12:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:37.579 ************************************ 00:23:37.579 START TEST nvmf_async_init 00:23:37.579 ************************************ 00:23:37.579 12:12:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:37.579 * Looking for test storage... 
00:23:37.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:37.579 12:12:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:37.579 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:37.579 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:37.579 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:37.579 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:37.579 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:37.579 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:37.579 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:37.579 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:37.579 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:37.579 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:37.579 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:37.579 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:37.579 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:37.579 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:37.579 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:37.579 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:37.579 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:37.579 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:37.579 12:12:26 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:37.579 12:12:26 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:37.579 12:12:26 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:37.579 12:12:26 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.579 12:12:26 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.579 12:12:26 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.579 12:12:26 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:37.580 12:12:26 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.580 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:23:37.580 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:37.580 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:37.580 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:37.580 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:37.580 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:37.580 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:37.580 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:23:37.580 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:37.580 12:12:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:37.580 12:12:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:37.580 12:12:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:37.580 12:12:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:37.580 12:12:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:37.580 12:12:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:37.580 12:12:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=eb006786dc9a4ded8b8238da4f89e32c 00:23:37.580 12:12:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:37.580 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:37.580 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:37.580 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:37.580 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:37.580 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:37.580 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.580 12:12:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:37.580 12:12:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.580 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:37.580 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:37.580 12:12:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:23:37.580 12:12:26 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@10 -- # set +x 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:44.144 
12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:44.144 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ 
tcp == rdma ]] 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:44.144 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:44.144 Found net devices under 0000:af:00.0: cvl_0_0 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:44.144 Found net devices under 0000:af:00.1: cvl_0_1 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:44.144 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:44.145 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:44.145 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:44.145 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:44.145 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:44.145 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:44.145 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:44.145 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:44.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:44.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:23:44.145 00:23:44.145 --- 10.0.0.2 ping statistics --- 00:23:44.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.145 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:23:44.145 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:44.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:44.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:23:44.145 00:23:44.145 --- 10.0.0.1 ping statistics --- 00:23:44.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.145 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:23:44.145 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:44.145 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:23:44.145 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:44.145 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:44.145 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:44.145 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:44.145 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:44.145 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:44.145 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:44.145 12:12:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:44.145 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:44.145 12:12:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:44.145 12:12:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 
00:23:44.145 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:44.145 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2298475 00:23:44.145 12:12:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2298475 00:23:44.145 12:12:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@830 -- # '[' -z 2298475 ']' 00:23:44.145 12:12:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.145 12:12:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:44.145 12:12:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:44.145 12:12:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:44.145 12:12:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.145 [2024-06-10 12:12:33.586975] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:23:44.145 [2024-06-10 12:12:33.587024] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.145 EAL: No free 2048 kB hugepages reported on node 1 00:23:44.145 [2024-06-10 12:12:33.661054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.403 [2024-06-10 12:12:33.735582] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.403 [2024-06-10 12:12:33.735617] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:44.403 [2024-06-10 12:12:33.735626] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:44.403 [2024-06-10 12:12:33.735638] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:44.403 [2024-06-10 12:12:33.735662] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:44.403 [2024-06-10 12:12:33.735687] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.970 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:44.970 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@863 -- # return 0 00:23:44.970 12:12:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:44.970 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:44.970 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.970 12:12:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:44.970 12:12:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:44.970 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:44.970 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.970 [2024-06-10 12:12:34.438234] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:44.970 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:44.970 12:12:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:44.970 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:44.970 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.970 null0 00:23:44.970 
12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:44.970 12:12:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:44.970 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:44.970 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.970 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:44.970 12:12:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:44.970 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:44.970 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.970 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:44.970 12:12:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g eb006786dc9a4ded8b8238da4f89e32c 00:23:44.970 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:44.970 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.970 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:44.970 12:12:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:44.970 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:44.970 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.970 [2024-06-10 12:12:34.482446] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:44.970 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:44.970 12:12:34 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:44.970 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:44.970 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.229 nvme0n1 00:23:45.229 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.229 12:12:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:45.229 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.229 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.229 [ 00:23:45.229 { 00:23:45.229 "name": "nvme0n1", 00:23:45.229 "aliases": [ 00:23:45.229 "eb006786-dc9a-4ded-8b82-38da4f89e32c" 00:23:45.229 ], 00:23:45.229 "product_name": "NVMe disk", 00:23:45.229 "block_size": 512, 00:23:45.229 "num_blocks": 2097152, 00:23:45.229 "uuid": "eb006786-dc9a-4ded-8b82-38da4f89e32c", 00:23:45.229 "assigned_rate_limits": { 00:23:45.229 "rw_ios_per_sec": 0, 00:23:45.229 "rw_mbytes_per_sec": 0, 00:23:45.229 "r_mbytes_per_sec": 0, 00:23:45.229 "w_mbytes_per_sec": 0 00:23:45.229 }, 00:23:45.229 "claimed": false, 00:23:45.229 "zoned": false, 00:23:45.229 "supported_io_types": { 00:23:45.229 "read": true, 00:23:45.229 "write": true, 00:23:45.229 "unmap": false, 00:23:45.229 "write_zeroes": true, 00:23:45.229 "flush": true, 00:23:45.229 "reset": true, 00:23:45.229 "compare": true, 00:23:45.229 "compare_and_write": true, 00:23:45.229 "abort": true, 00:23:45.229 "nvme_admin": true, 00:23:45.229 "nvme_io": true 00:23:45.229 }, 00:23:45.229 "memory_domains": [ 00:23:45.229 { 00:23:45.229 "dma_device_id": "system", 00:23:45.229 "dma_device_type": 1 00:23:45.229 } 00:23:45.229 ], 00:23:45.229 "driver_specific": { 00:23:45.229 "nvme": [ 00:23:45.229 { 00:23:45.229 "trid": { 
00:23:45.229 "trtype": "TCP", 00:23:45.229 "adrfam": "IPv4", 00:23:45.229 "traddr": "10.0.0.2", 00:23:45.229 "trsvcid": "4420", 00:23:45.229 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:45.229 }, 00:23:45.229 "ctrlr_data": { 00:23:45.229 "cntlid": 1, 00:23:45.229 "vendor_id": "0x8086", 00:23:45.229 "model_number": "SPDK bdev Controller", 00:23:45.229 "serial_number": "00000000000000000000", 00:23:45.229 "firmware_revision": "24.09", 00:23:45.229 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:45.229 "oacs": { 00:23:45.229 "security": 0, 00:23:45.229 "format": 0, 00:23:45.229 "firmware": 0, 00:23:45.229 "ns_manage": 0 00:23:45.229 }, 00:23:45.229 "multi_ctrlr": true, 00:23:45.229 "ana_reporting": false 00:23:45.488 }, 00:23:45.488 "vs": { 00:23:45.488 "nvme_version": "1.3" 00:23:45.488 }, 00:23:45.488 "ns_data": { 00:23:45.488 "id": 1, 00:23:45.488 "can_share": true 00:23:45.488 } 00:23:45.488 } 00:23:45.488 ], 00:23:45.488 "mp_policy": "active_passive" 00:23:45.488 } 00:23:45.488 } 00:23:45.488 ] 00:23:45.488 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.488 12:12:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:45.488 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.488 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.488 [2024-06-10 12:12:34.755049] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:45.488 [2024-06-10 12:12:34.755117] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1912480 (9): Bad file descriptor 00:23:45.488 [2024-06-10 12:12:34.886552] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:45.488 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.488 12:12:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:45.488 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.488 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.488 [ 00:23:45.488 { 00:23:45.488 "name": "nvme0n1", 00:23:45.488 "aliases": [ 00:23:45.488 "eb006786-dc9a-4ded-8b82-38da4f89e32c" 00:23:45.488 ], 00:23:45.488 "product_name": "NVMe disk", 00:23:45.488 "block_size": 512, 00:23:45.488 "num_blocks": 2097152, 00:23:45.488 "uuid": "eb006786-dc9a-4ded-8b82-38da4f89e32c", 00:23:45.488 "assigned_rate_limits": { 00:23:45.488 "rw_ios_per_sec": 0, 00:23:45.488 "rw_mbytes_per_sec": 0, 00:23:45.488 "r_mbytes_per_sec": 0, 00:23:45.488 "w_mbytes_per_sec": 0 00:23:45.488 }, 00:23:45.488 "claimed": false, 00:23:45.488 "zoned": false, 00:23:45.488 "supported_io_types": { 00:23:45.488 "read": true, 00:23:45.488 "write": true, 00:23:45.488 "unmap": false, 00:23:45.488 "write_zeroes": true, 00:23:45.488 "flush": true, 00:23:45.488 "reset": true, 00:23:45.488 "compare": true, 00:23:45.488 "compare_and_write": true, 00:23:45.488 "abort": true, 00:23:45.488 "nvme_admin": true, 00:23:45.488 "nvme_io": true 00:23:45.488 }, 00:23:45.488 "memory_domains": [ 00:23:45.488 { 00:23:45.488 "dma_device_id": "system", 00:23:45.488 "dma_device_type": 1 00:23:45.488 } 00:23:45.488 ], 00:23:45.488 "driver_specific": { 00:23:45.488 "nvme": [ 00:23:45.488 { 00:23:45.488 "trid": { 00:23:45.488 "trtype": "TCP", 00:23:45.488 "adrfam": "IPv4", 00:23:45.488 "traddr": "10.0.0.2", 00:23:45.488 "trsvcid": "4420", 00:23:45.488 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:45.488 }, 00:23:45.488 "ctrlr_data": { 00:23:45.488 "cntlid": 2, 00:23:45.488 "vendor_id": "0x8086", 00:23:45.488 "model_number": "SPDK bdev Controller", 00:23:45.488 "serial_number": 
"00000000000000000000", 00:23:45.488 "firmware_revision": "24.09", 00:23:45.488 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:45.488 "oacs": { 00:23:45.488 "security": 0, 00:23:45.488 "format": 0, 00:23:45.488 "firmware": 0, 00:23:45.488 "ns_manage": 0 00:23:45.488 }, 00:23:45.488 "multi_ctrlr": true, 00:23:45.488 "ana_reporting": false 00:23:45.488 }, 00:23:45.488 "vs": { 00:23:45.488 "nvme_version": "1.3" 00:23:45.488 }, 00:23:45.488 "ns_data": { 00:23:45.488 "id": 1, 00:23:45.488 "can_share": true 00:23:45.488 } 00:23:45.488 } 00:23:45.488 ], 00:23:45.488 "mp_policy": "active_passive" 00:23:45.488 } 00:23:45.488 } 00:23:45.488 ] 00:23:45.488 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.488 12:12:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.488 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.488 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.488 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.488 12:12:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:45.488 12:12:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.KCnIqfH0gP 00:23:45.488 12:12:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:45.488 12:12:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.KCnIqfH0gP 00:23:45.488 12:12:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:45.488 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.488 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.488 12:12:34 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.488 12:12:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:45.488 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.488 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.488 [2024-06-10 12:12:34.959680] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:45.488 [2024-06-10 12:12:34.959793] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:45.488 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.488 12:12:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KCnIqfH0gP 00:23:45.488 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.488 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.488 [2024-06-10 12:12:34.967698] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:45.488 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.488 12:12:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KCnIqfH0gP 00:23:45.488 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.488 12:12:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.488 [2024-06-10 12:12:34.979742] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:23:45.488 [2024-06-10 12:12:34.979778] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:45.747 nvme0n1 00:23:45.747 12:12:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.747 12:12:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:45.747 12:12:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.747 12:12:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.747 [ 00:23:45.747 { 00:23:45.747 "name": "nvme0n1", 00:23:45.747 "aliases": [ 00:23:45.747 "eb006786-dc9a-4ded-8b82-38da4f89e32c" 00:23:45.747 ], 00:23:45.747 "product_name": "NVMe disk", 00:23:45.747 "block_size": 512, 00:23:45.747 "num_blocks": 2097152, 00:23:45.747 "uuid": "eb006786-dc9a-4ded-8b82-38da4f89e32c", 00:23:45.747 "assigned_rate_limits": { 00:23:45.747 "rw_ios_per_sec": 0, 00:23:45.747 "rw_mbytes_per_sec": 0, 00:23:45.747 "r_mbytes_per_sec": 0, 00:23:45.747 "w_mbytes_per_sec": 0 00:23:45.747 }, 00:23:45.747 "claimed": false, 00:23:45.747 "zoned": false, 00:23:45.747 "supported_io_types": { 00:23:45.747 "read": true, 00:23:45.747 "write": true, 00:23:45.747 "unmap": false, 00:23:45.747 "write_zeroes": true, 00:23:45.747 "flush": true, 00:23:45.747 "reset": true, 00:23:45.747 "compare": true, 00:23:45.747 "compare_and_write": true, 00:23:45.747 "abort": true, 00:23:45.747 "nvme_admin": true, 00:23:45.747 "nvme_io": true 00:23:45.747 }, 00:23:45.747 "memory_domains": [ 00:23:45.747 { 00:23:45.747 "dma_device_id": "system", 00:23:45.747 "dma_device_type": 1 00:23:45.747 } 00:23:45.747 ], 00:23:45.747 "driver_specific": { 00:23:45.747 "nvme": [ 00:23:45.747 { 00:23:45.747 "trid": { 00:23:45.747 "trtype": "TCP", 00:23:45.747 "adrfam": "IPv4", 00:23:45.747 "traddr": "10.0.0.2", 00:23:45.747 "trsvcid": "4421", 00:23:45.747 "subnqn": 
"nqn.2016-06.io.spdk:cnode0" 00:23:45.747 }, 00:23:45.747 "ctrlr_data": { 00:23:45.747 "cntlid": 3, 00:23:45.747 "vendor_id": "0x8086", 00:23:45.747 "model_number": "SPDK bdev Controller", 00:23:45.747 "serial_number": "00000000000000000000", 00:23:45.747 "firmware_revision": "24.09", 00:23:45.747 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:45.747 "oacs": { 00:23:45.747 "security": 0, 00:23:45.747 "format": 0, 00:23:45.747 "firmware": 0, 00:23:45.747 "ns_manage": 0 00:23:45.747 }, 00:23:45.747 "multi_ctrlr": true, 00:23:45.747 "ana_reporting": false 00:23:45.747 }, 00:23:45.747 "vs": { 00:23:45.747 "nvme_version": "1.3" 00:23:45.747 }, 00:23:45.747 "ns_data": { 00:23:45.747 "id": 1, 00:23:45.747 "can_share": true 00:23:45.747 } 00:23:45.747 } 00:23:45.747 ], 00:23:45.747 "mp_policy": "active_passive" 00:23:45.747 } 00:23:45.747 } 00:23:45.747 ] 00:23:45.747 12:12:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.747 12:12:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.747 12:12:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.747 12:12:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.747 12:12:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.747 12:12:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.KCnIqfH0gP 00:23:45.747 12:12:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:45.747 12:12:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:23:45.747 12:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:45.747 12:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:23:45.747 12:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:45.747 12:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set 
+e 00:23:45.747 12:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:45.747 12:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:45.747 rmmod nvme_tcp 00:23:45.747 rmmod nvme_fabrics 00:23:45.747 rmmod nvme_keyring 00:23:45.747 12:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:45.747 12:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:23:45.747 12:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:23:45.747 12:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2298475 ']' 00:23:45.747 12:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2298475 00:23:45.747 12:12:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@949 -- # '[' -z 2298475 ']' 00:23:45.747 12:12:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # kill -0 2298475 00:23:45.747 12:12:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # uname 00:23:45.747 12:12:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:45.747 12:12:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2298475 00:23:45.748 12:12:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:45.748 12:12:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:45.748 12:12:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2298475' 00:23:45.748 killing process with pid 2298475 00:23:45.748 12:12:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@968 -- # kill 2298475 00:23:45.748 [2024-06-10 12:12:35.191913] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:45.748 [2024-06-10 12:12:35.191937] 
app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:45.748 12:12:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@973 -- # wait 2298475 00:23:46.006 12:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:46.006 12:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:46.006 12:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:46.006 12:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:46.006 12:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:46.006 12:12:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.006 12:12:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:46.006 12:12:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.910 12:12:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:47.910 00:23:47.910 real 0m10.781s 00:23:47.910 user 0m3.782s 00:23:47.910 sys 0m5.599s 00:23:47.910 12:12:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:47.910 12:12:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.910 ************************************ 00:23:47.910 END TEST nvmf_async_init 00:23:47.910 ************************************ 00:23:48.169 12:12:37 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:48.169 12:12:37 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:48.169 12:12:37 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:48.169 12:12:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:48.169 
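The nvmf_async_init TLS section above prepares its PSK on disk with `mktemp`, `echo -n`, and `chmod 0600` before passing the path to `nvmf_subsystem_add_host --psk` and `bdev_nvme_attach_controller --psk`. A minimal standalone sketch of that key-file pattern, with a placeholder value rather than a real NVMe/TCP PSK:

```shell
#!/bin/sh
# Sketch of the PSK key-file handling seen in the async_init test:
# create a private temp file, write the key without a trailing newline,
# and restrict it to owner read/write before handing the path to SPDK.
# The key value below is a placeholder, not a usable NVMeTLSkey.
key_path=$(mktemp)
echo -n "NVMeTLSkey-1:01:PLACEHOLDERPLACEHOLDERPLACEHOLDERPLACEHOLD:" > "$key_path"
chmod 0600 "$key_path"

# Verify the permissions SPDK expects for a PSK file.
stat -c '%a' "$key_path"   # prints: 600

rm -f "$key_path"
```

The `chmod 0600` step matters because the file holds a shared secret; the RPCs in the log only ever receive the path, never the key material itself.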
************************************ 00:23:48.169 START TEST dma 00:23:48.169 ************************************ 00:23:48.169 12:12:37 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:48.169 * Looking for test storage... 00:23:48.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:48.169 12:12:37 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:48.169 12:12:37 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:23:48.169 12:12:37 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:48.169 12:12:37 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:48.169 12:12:37 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:48.169 12:12:37 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:48.169 12:12:37 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:48.169 12:12:37 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:48.169 12:12:37 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:48.169 12:12:37 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:48.169 12:12:37 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:48.169 12:12:37 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:48.169 12:12:37 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:48.169 12:12:37 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:48.169 12:12:37 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:48.169 12:12:37 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:48.169 12:12:37 nvmf_tcp.dma -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:23:48.169 12:12:37 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:48.169 12:12:37 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:48.169 12:12:37 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:48.169 12:12:37 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:48.169 12:12:37 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:48.170 12:12:37 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.170 12:12:37 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.170 12:12:37 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.170 12:12:37 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:23:48.170 12:12:37 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.170 12:12:37 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:23:48.170 12:12:37 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:48.170 12:12:37 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:48.170 12:12:37 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:48.170 12:12:37 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:48.170 12:12:37 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:48.170 12:12:37 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:48.170 12:12:37 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:48.170 12:12:37 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:48.170 12:12:37 nvmf_tcp.dma -- 
host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:48.170 12:12:37 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:23:48.170 00:23:48.170 real 0m0.132s 00:23:48.170 user 0m0.065s 00:23:48.170 sys 0m0.077s 00:23:48.170 12:12:37 nvmf_tcp.dma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:48.170 12:12:37 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:23:48.170 ************************************ 00:23:48.170 END TEST dma 00:23:48.170 ************************************ 00:23:48.170 12:12:37 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:48.170 12:12:37 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:48.170 12:12:37 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:48.170 12:12:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:48.428 ************************************ 00:23:48.428 START TEST nvmf_identify 00:23:48.428 ************************************ 00:23:48.428 12:12:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:48.428 * Looking for test storage... 
00:23:48.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:48.428 12:12:37 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:48.428 12:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:48.428 12:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:48.428 12:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:48.428 12:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:48.428 12:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:48.428 12:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:48.428 12:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:48.428 12:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:48.428 12:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:48.428 12:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:48.428 12:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:48.428 12:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:48.428 12:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:48.428 12:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:48.428 12:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:48.428 12:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:48.429 12:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:48.429 12:12:37 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:48.429 12:12:37 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:48.429 12:12:37 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:48.429 12:12:37 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:48.429 12:12:37 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.429 12:12:37 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.429 12:12:37 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.429 12:12:37 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:48.429 12:12:37 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.429 12:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:23:48.429 12:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:48.429 12:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:48.429 12:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:48.429 12:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:48.429 12:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:48.429 12:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:48.429 12:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:48.429 12:12:37 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:48.429 12:12:37 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:48.429 12:12:37 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:48.429 12:12:37 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:48.429 12:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:48.429 12:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:48.429 12:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:48.429 12:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:48.429 12:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:48.429 12:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.429 12:12:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:48.429 12:12:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.429 12:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:48.429 12:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:48.429 12:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:23:48.429 12:12:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:54.986 12:12:43 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:54.986 
12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:54.986 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:54.986 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:54.986 Found net devices under 0000:af:00.0: cvl_0_0 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:54.986 Found net devices under 0000:af:00.1: cvl_0_1 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:54.986 12:12:43 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:54.986 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:54.986 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:23:54.986 00:23:54.986 --- 10.0.0.2 ping statistics --- 00:23:54.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.986 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:23:54.986 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:54.987 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:54.987 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:23:54.987 00:23:54.987 --- 10.0.0.1 ping statistics --- 00:23:54.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.987 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:23:54.987 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:54.987 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:23:54.987 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:54.987 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:54.987 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:54.987 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:54.987 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:54.987 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:54.987 12:12:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:54.987 12:12:43 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:54.987 12:12:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:54.987 12:12:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.987 12:12:43 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2302452 00:23:54.987 12:12:43 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:54.987 12:12:43 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2302452 00:23:54.987 12:12:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@830 -- # '[' -z 2302452 ']' 00:23:54.987 12:12:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.987 
12:12:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:54.987 12:12:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.987 12:12:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:54.987 12:12:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.987 12:12:43 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:54.987 [2024-06-10 12:12:43.981963] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:23:54.987 [2024-06-10 12:12:43.982009] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:54.987 EAL: No free 2048 kB hugepages reported on node 1 00:23:54.987 [2024-06-10 12:12:44.056961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:54.987 [2024-06-10 12:12:44.133089] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:54.987 [2024-06-10 12:12:44.133127] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:54.987 [2024-06-10 12:12:44.133137] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:54.987 [2024-06-10 12:12:44.133145] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:54.987 [2024-06-10 12:12:44.133152] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:54.987 [2024-06-10 12:12:44.133195] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.987 [2024-06-10 12:12:44.133214] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:54.987 [2024-06-10 12:12:44.133302] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:54.987 [2024-06-10 12:12:44.133304] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.554 12:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:55.554 12:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@863 -- # return 0 00:23:55.554 12:12:44 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:55.554 12:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.554 12:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.554 [2024-06-10 12:12:44.792322] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:55.554 12:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.554 12:12:44 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:55.554 12:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:55.554 12:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.554 12:12:44 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:55.554 12:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.554 12:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.554 Malloc0 00:23:55.554 12:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.554 12:12:44 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:55.554 
12:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.554 12:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.554 12:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.554 12:12:44 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:55.554 12:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.554 12:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.554 12:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.554 12:12:44 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:55.554 12:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.554 12:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.554 [2024-06-10 12:12:44.895025] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:55.554 12:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.554 12:12:44 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:55.554 12:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.554 12:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.554 12:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.554 12:12:44 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:55.554 12:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.554 12:12:44 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.554 [ 00:23:55.554 { 00:23:55.554 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:55.554 "subtype": "Discovery", 00:23:55.554 "listen_addresses": [ 00:23:55.554 { 00:23:55.554 "trtype": "TCP", 00:23:55.554 "adrfam": "IPv4", 00:23:55.554 "traddr": "10.0.0.2", 00:23:55.554 "trsvcid": "4420" 00:23:55.554 } 00:23:55.554 ], 00:23:55.554 "allow_any_host": true, 00:23:55.554 "hosts": [] 00:23:55.554 }, 00:23:55.554 { 00:23:55.554 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.554 "subtype": "NVMe", 00:23:55.554 "listen_addresses": [ 00:23:55.554 { 00:23:55.554 "trtype": "TCP", 00:23:55.554 "adrfam": "IPv4", 00:23:55.554 "traddr": "10.0.0.2", 00:23:55.554 "trsvcid": "4420" 00:23:55.554 } 00:23:55.554 ], 00:23:55.554 "allow_any_host": true, 00:23:55.554 "hosts": [], 00:23:55.554 "serial_number": "SPDK00000000000001", 00:23:55.554 "model_number": "SPDK bdev Controller", 00:23:55.554 "max_namespaces": 32, 00:23:55.554 "min_cntlid": 1, 00:23:55.554 "max_cntlid": 65519, 00:23:55.554 "namespaces": [ 00:23:55.554 { 00:23:55.554 "nsid": 1, 00:23:55.554 "bdev_name": "Malloc0", 00:23:55.554 "name": "Malloc0", 00:23:55.554 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:55.554 "eui64": "ABCDEF0123456789", 00:23:55.554 "uuid": "288cafcd-2089-4d4e-832c-d2a337d96116" 00:23:55.554 } 00:23:55.554 ] 00:23:55.554 } 00:23:55.554 ] 00:23:55.554 12:12:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.554 12:12:44 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:55.554 [2024-06-10 12:12:44.952245] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:23:55.554 [2024-06-10 12:12:44.952286] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2302599 ] 00:23:55.554 EAL: No free 2048 kB hugepages reported on node 1 00:23:55.554 [2024-06-10 12:12:44.982859] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:55.554 [2024-06-10 12:12:44.982908] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:55.554 [2024-06-10 12:12:44.982915] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:55.554 [2024-06-10 12:12:44.982929] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:55.554 [2024-06-10 12:12:44.982939] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:55.554 [2024-06-10 12:12:44.983215] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:55.554 [2024-06-10 12:12:44.983250] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x179ef00 0 00:23:55.554 [2024-06-10 12:12:44.997486] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:55.554 [2024-06-10 12:12:44.997500] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:55.554 [2024-06-10 12:12:44.997506] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:55.554 [2024-06-10 12:12:44.997511] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:55.555 [2024-06-10 12:12:44.997550] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.555 [2024-06-10 12:12:44.997557] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:23:55.555 [2024-06-10 12:12:44.997562] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x179ef00) 00:23:55.555 [2024-06-10 12:12:44.997575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:55.555 [2024-06-10 12:12:44.997592] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1809df0, cid 0, qid 0 00:23:55.555 [2024-06-10 12:12:45.005488] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.555 [2024-06-10 12:12:45.005502] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.555 [2024-06-10 12:12:45.005507] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.555 [2024-06-10 12:12:45.005512] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1809df0) on tqpair=0x179ef00 00:23:55.555 [2024-06-10 12:12:45.005524] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:55.555 [2024-06-10 12:12:45.005531] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:55.555 [2024-06-10 12:12:45.005538] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:55.555 [2024-06-10 12:12:45.005553] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.555 [2024-06-10 12:12:45.005559] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.555 [2024-06-10 12:12:45.005563] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x179ef00) 00:23:55.555 [2024-06-10 12:12:45.005572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.555 [2024-06-10 12:12:45.005587] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1809df0, cid 0, qid 0 00:23:55.555 [2024-06-10 12:12:45.005754] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.555 [2024-06-10 12:12:45.005761] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.555 [2024-06-10 12:12:45.005766] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.555 [2024-06-10 12:12:45.005771] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1809df0) on tqpair=0x179ef00 00:23:55.555 [2024-06-10 12:12:45.005778] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:55.555 [2024-06-10 12:12:45.005786] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:55.555 [2024-06-10 12:12:45.005794] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.555 [2024-06-10 12:12:45.005799] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.555 [2024-06-10 12:12:45.005803] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x179ef00) 00:23:55.555 [2024-06-10 12:12:45.005810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.555 [2024-06-10 12:12:45.005823] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1809df0, cid 0, qid 0 00:23:55.555 [2024-06-10 12:12:45.005890] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.555 [2024-06-10 12:12:45.005897] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.555 [2024-06-10 12:12:45.005901] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.555 [2024-06-10 12:12:45.005906] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1809df0) on tqpair=0x179ef00 00:23:55.555 [2024-06-10 
12:12:45.005913] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:55.555 [2024-06-10 12:12:45.005922] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:55.555 [2024-06-10 12:12:45.005929] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.555 [2024-06-10 12:12:45.005934] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.555 [2024-06-10 12:12:45.005939] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x179ef00) 00:23:55.555 [2024-06-10 12:12:45.005946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.555 [2024-06-10 12:12:45.005957] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1809df0, cid 0, qid 0 00:23:55.555 [2024-06-10 12:12:45.006027] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.555 [2024-06-10 12:12:45.006036] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.555 [2024-06-10 12:12:45.006041] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.555 [2024-06-10 12:12:45.006045] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1809df0) on tqpair=0x179ef00 00:23:55.555 [2024-06-10 12:12:45.006052] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:55.555 [2024-06-10 12:12:45.006062] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.555 [2024-06-10 12:12:45.006067] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.555 [2024-06-10 12:12:45.006072] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x179ef00) 00:23:55.555 [2024-06-10 12:12:45.006079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.555 [2024-06-10 12:12:45.006090] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1809df0, cid 0, qid 0 00:23:55.555 [2024-06-10 12:12:45.006157] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.555 [2024-06-10 12:12:45.006164] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.555 [2024-06-10 12:12:45.006168] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.555 [2024-06-10 12:12:45.006173] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1809df0) on tqpair=0x179ef00 00:23:55.555 [2024-06-10 12:12:45.006180] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:55.555 [2024-06-10 12:12:45.006186] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:55.555 [2024-06-10 12:12:45.006194] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:55.555 [2024-06-10 12:12:45.006301] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:55.555 [2024-06-10 12:12:45.006307] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:55.555 [2024-06-10 12:12:45.006317] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.555 [2024-06-10 12:12:45.006321] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.555 [2024-06-10 12:12:45.006326] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x179ef00) 00:23:55.555 [2024-06-10 12:12:45.006333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.555 [2024-06-10 12:12:45.006344] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1809df0, cid 0, qid 0 00:23:55.555 [2024-06-10 12:12:45.006415] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.555 [2024-06-10 12:12:45.006421] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.555 [2024-06-10 12:12:45.006426] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.555 [2024-06-10 12:12:45.006430] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1809df0) on tqpair=0x179ef00 00:23:55.555 [2024-06-10 12:12:45.006437] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:55.555 [2024-06-10 12:12:45.006447] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.555 [2024-06-10 12:12:45.006452] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.555 [2024-06-10 12:12:45.006456] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x179ef00) 00:23:55.555 [2024-06-10 12:12:45.006463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.555 [2024-06-10 12:12:45.006475] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1809df0, cid 0, qid 0 00:23:55.555 [2024-06-10 12:12:45.006563] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.555 [2024-06-10 12:12:45.006571] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.555 [2024-06-10 12:12:45.006575] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.555 [2024-06-10 12:12:45.006580] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1809df0) on tqpair=0x179ef00 00:23:55.555 [2024-06-10 12:12:45.006587] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:55.555 [2024-06-10 12:12:45.006593] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:55.555 [2024-06-10 12:12:45.006601] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:55.555 [2024-06-10 12:12:45.006611] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:55.555 [2024-06-10 12:12:45.006621] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.555 [2024-06-10 12:12:45.006626] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x179ef00) 00:23:55.555 [2024-06-10 12:12:45.006633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.555 [2024-06-10 12:12:45.006645] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1809df0, cid 0, qid 0 00:23:55.555 [2024-06-10 12:12:45.006741] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.555 [2024-06-10 12:12:45.006748] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.555 [2024-06-10 12:12:45.006753] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.555 [2024-06-10 12:12:45.006758] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x179ef00): datao=0, datal=4096, cccid=0 00:23:55.555 [2024-06-10 12:12:45.006764] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1809df0) on tqpair(0x179ef00): expected_datao=0, payload_size=4096 00:23:55.555 [2024-06-10 12:12:45.006770] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.555 [2024-06-10 12:12:45.006788] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.555 [2024-06-10 12:12:45.006794] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.555 [2024-06-10 12:12:45.047554] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.555 [2024-06-10 12:12:45.047567] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.555 [2024-06-10 12:12:45.047572] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.555 [2024-06-10 12:12:45.047577] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1809df0) on tqpair=0x179ef00 00:23:55.555 [2024-06-10 12:12:45.047589] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:55.556 [2024-06-10 12:12:45.047595] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:55.556 [2024-06-10 12:12:45.047601] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:55.556 [2024-06-10 12:12:45.047611] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:55.556 [2024-06-10 12:12:45.047617] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:55.556 [2024-06-10 12:12:45.047624] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:55.556 [2024-06-10 
12:12:45.047635] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:55.556 [2024-06-10 12:12:45.047644] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.556 [2024-06-10 12:12:45.047651] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.556 [2024-06-10 12:12:45.047656] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x179ef00) 00:23:55.556 [2024-06-10 12:12:45.047665] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:55.556 [2024-06-10 12:12:45.047678] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1809df0, cid 0, qid 0 00:23:55.556 [2024-06-10 12:12:45.047748] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.556 [2024-06-10 12:12:45.047755] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.556 [2024-06-10 12:12:45.047759] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.556 [2024-06-10 12:12:45.047764] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1809df0) on tqpair=0x179ef00 00:23:55.556 [2024-06-10 12:12:45.047773] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.556 [2024-06-10 12:12:45.047777] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.556 [2024-06-10 12:12:45.047782] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x179ef00) 00:23:55.556 [2024-06-10 12:12:45.047789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.556 [2024-06-10 12:12:45.047796] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.556 [2024-06-10 12:12:45.047800] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.556 [2024-06-10 12:12:45.047805] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x179ef00)
00:23:55.556 [2024-06-10 12:12:45.047811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:55.556 [2024-06-10 12:12:45.047818] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.556 [2024-06-10 12:12:45.047823] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.556 [2024-06-10 12:12:45.047827] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x179ef00)
00:23:55.556 [2024-06-10 12:12:45.047834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:55.556 [2024-06-10 12:12:45.047840] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.556 [2024-06-10 12:12:45.047845] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.556 [2024-06-10 12:12:45.047850] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x179ef00)
00:23:55.556 [2024-06-10 12:12:45.047856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:55.556 [2024-06-10 12:12:45.047862] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms)
00:23:55.556 [2024-06-10 12:12:45.047875] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:23:55.556 [2024-06-10 12:12:45.047882] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.556 [2024-06-10 12:12:45.047887] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x179ef00)
00:23:55.556 [2024-06-10 12:12:45.047894] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.556 [2024-06-10 12:12:45.047907] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1809df0, cid 0, qid 0
00:23:55.556 [2024-06-10 12:12:45.047913] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1809f50, cid 1, qid 0
00:23:55.556 [2024-06-10 12:12:45.047918] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x180a0b0, cid 2, qid 0
00:23:55.556 [2024-06-10 12:12:45.047923] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x180a210, cid 3, qid 0
00:23:55.556 [2024-06-10 12:12:45.047930] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x180a370, cid 4, qid 0
00:23:55.556 [2024-06-10 12:12:45.048025] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.556 [2024-06-10 12:12:45.048032] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.556 [2024-06-10 12:12:45.048036] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.556 [2024-06-10 12:12:45.048041] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x180a370) on tqpair=0x179ef00
00:23:55.556 [2024-06-10 12:12:45.048048] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us
00:23:55.556 [2024-06-10 12:12:45.048054] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout)
00:23:55.556 [2024-06-10 12:12:45.048066] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.556 [2024-06-10 12:12:45.048071] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x179ef00)
00:23:55.556 [2024-06-10 12:12:45.048078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.556 [2024-06-10 12:12:45.048089] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x180a370, cid 4, qid 0
00:23:55.556 [2024-06-10 12:12:45.048177] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:55.556 [2024-06-10 12:12:45.048184] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:55.556 [2024-06-10 12:12:45.048189] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:55.556 [2024-06-10 12:12:45.048194] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x179ef00): datao=0, datal=4096, cccid=4
00:23:55.556 [2024-06-10 12:12:45.048200] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x180a370) on tqpair(0x179ef00): expected_datao=0, payload_size=4096
00:23:55.556 [2024-06-10 12:12:45.048206] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.556 [2024-06-10 12:12:45.048213] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:55.556 [2024-06-10 12:12:45.048218] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:55.556 [2024-06-10 12:12:45.048250] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.556 [2024-06-10 12:12:45.048257] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.556 [2024-06-10 12:12:45.048261] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.556 [2024-06-10 12:12:45.048266] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x180a370) on tqpair=0x179ef00
00:23:55.556 [2024-06-10 12:12:45.048280] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state
00:23:55.556 [2024-06-10 12:12:45.048305] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.556 [2024-06-10 12:12:45.048310] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x179ef00)
00:23:55.556 [2024-06-10 12:12:45.048317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.556 [2024-06-10 12:12:45.048325] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.556 [2024-06-10 12:12:45.048330] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.556 [2024-06-10 12:12:45.048334] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x179ef00)
00:23:55.556 [2024-06-10 12:12:45.048341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:23:55.556 [2024-06-10 12:12:45.048356] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x180a370, cid 4, qid 0
00:23:55.556 [2024-06-10 12:12:45.048362] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x180a4d0, cid 5, qid 0
00:23:55.556 [2024-06-10 12:12:45.048463] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:55.556 [2024-06-10 12:12:45.048469] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:55.556 [2024-06-10 12:12:45.052483] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:55.556 [2024-06-10 12:12:45.052491] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x179ef00): datao=0, datal=1024, cccid=4
00:23:55.556 [2024-06-10 12:12:45.052497] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x180a370) on tqpair(0x179ef00): expected_datao=0, payload_size=1024
00:23:55.556 [2024-06-10 12:12:45.052502] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.556 [2024-06-10 12:12:45.052510] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:55.556 [2024-06-10 12:12:45.052514] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:55.556 [2024-06-10 12:12:45.052521] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.556 [2024-06-10 12:12:45.052527] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.556 [2024-06-10 12:12:45.052531] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.556 [2024-06-10 12:12:45.052536] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x180a4d0) on tqpair=0x179ef00
00:23:55.821 [2024-06-10 12:12:45.092493] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.821 [2024-06-10 12:12:45.092504] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.821 [2024-06-10 12:12:45.092509] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.821 [2024-06-10 12:12:45.092514] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x180a370) on tqpair=0x179ef00
00:23:55.821 [2024-06-10 12:12:45.092531] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.821 [2024-06-10 12:12:45.092536] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x179ef00)
00:23:55.821 [2024-06-10 12:12:45.092544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.821 [2024-06-10 12:12:45.092563] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x180a370, cid 4, qid 0
00:23:55.821 [2024-06-10 12:12:45.092723] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:55.821 [2024-06-10 12:12:45.092730] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:55.821 [2024-06-10 12:12:45.092735] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:55.821 [2024-06-10 12:12:45.092739] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x179ef00): datao=0, datal=3072, cccid=4
00:23:55.821 [2024-06-10 12:12:45.092745] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x180a370) on tqpair(0x179ef00): expected_datao=0, payload_size=3072
00:23:55.821 [2024-06-10 12:12:45.092751] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.821 [2024-06-10 12:12:45.092758] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:55.821 [2024-06-10 12:12:45.092763] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:55.821 [2024-06-10 12:12:45.092800] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.821 [2024-06-10 12:12:45.092807] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.821 [2024-06-10 12:12:45.092812] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.821 [2024-06-10 12:12:45.092816] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x180a370) on tqpair=0x179ef00
00:23:55.821 [2024-06-10 12:12:45.092826] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.821 [2024-06-10 12:12:45.092831] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x179ef00)
00:23:55.821 [2024-06-10 12:12:45.092838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.821 [2024-06-10 12:12:45.092855] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x180a370, cid 4, qid 0
00:23:55.821 [2024-06-10 12:12:45.092934] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:55.821 [2024-06-10 12:12:45.092940] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:55.821 [2024-06-10 12:12:45.092947] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:55.821 [2024-06-10 12:12:45.092952] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x179ef00): datao=0, datal=8, cccid=4
00:23:55.821 [2024-06-10 12:12:45.092958] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x180a370) on tqpair(0x179ef00): expected_datao=0, payload_size=8
00:23:55.821 [2024-06-10 12:12:45.092964] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.821 [2024-06-10 12:12:45.092970] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:55.821 [2024-06-10 12:12:45.092975] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:55.821 [2024-06-10 12:12:45.133664] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.821 [2024-06-10 12:12:45.133675] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.821 [2024-06-10 12:12:45.133680] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.821 [2024-06-10 12:12:45.133685] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x180a370) on tqpair=0x179ef00
00:23:55.821 =====================================================
00:23:55.821 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:23:55.821 =====================================================
00:23:55.821 Controller Capabilities/Features
00:23:55.821 ================================
00:23:55.821 Vendor ID: 0000
00:23:55.821 Subsystem Vendor ID: 0000
00:23:55.821 Serial Number: ....................
00:23:55.821 Model Number: ........................................
00:23:55.821 Firmware Version: 24.09
00:23:55.821 Recommended Arb Burst: 0
00:23:55.821 IEEE OUI Identifier: 00 00 00
00:23:55.822 Multi-path I/O
00:23:55.822 May have multiple subsystem ports: No
00:23:55.822 May have multiple controllers: No
00:23:55.822 Associated with SR-IOV VF: No
00:23:55.822 Max Data Transfer Size: 131072
00:23:55.822 Max Number of Namespaces: 0
00:23:55.822 Max Number of I/O Queues: 1024
00:23:55.822 NVMe Specification Version (VS): 1.3
00:23:55.822 NVMe Specification Version (Identify): 1.3
00:23:55.822 Maximum Queue Entries: 128
00:23:55.822 Contiguous Queues Required: Yes
00:23:55.822 Arbitration Mechanisms Supported
00:23:55.822 Weighted Round Robin: Not Supported
00:23:55.822 Vendor Specific: Not Supported
00:23:55.822 Reset Timeout: 15000 ms
00:23:55.822 Doorbell Stride: 4 bytes
00:23:55.822 NVM Subsystem Reset: Not Supported
00:23:55.822 Command Sets Supported
00:23:55.822 NVM Command Set: Supported
00:23:55.822 Boot Partition: Not Supported
00:23:55.822 Memory Page Size Minimum: 4096 bytes
00:23:55.822 Memory Page Size Maximum: 4096 bytes
00:23:55.822 Persistent Memory Region: Not Supported
00:23:55.822 Optional Asynchronous Events Supported
00:23:55.822 Namespace Attribute Notices: Not Supported
00:23:55.822 Firmware Activation Notices: Not Supported
00:23:55.822 ANA Change Notices: Not Supported
00:23:55.822 PLE Aggregate Log Change Notices: Not Supported
00:23:55.822 LBA Status Info Alert Notices: Not Supported
00:23:55.822 EGE Aggregate Log Change Notices: Not Supported
00:23:55.822 Normal NVM Subsystem Shutdown event: Not Supported
00:23:55.822 Zone Descriptor Change Notices: Not Supported
00:23:55.822 Discovery Log Change Notices: Supported
00:23:55.822 Controller Attributes
00:23:55.822 128-bit Host Identifier: Not Supported
00:23:55.822 Non-Operational Permissive Mode: Not Supported
00:23:55.822 NVM Sets: Not Supported
00:23:55.822 Read Recovery Levels: Not Supported
00:23:55.822 Endurance Groups: Not Supported
00:23:55.822 Predictable Latency Mode: Not Supported
00:23:55.822 Traffic Based Keep ALive: Not Supported
00:23:55.822 Namespace Granularity: Not Supported
00:23:55.822 SQ Associations: Not Supported
00:23:55.822 UUID List: Not Supported
00:23:55.822 Multi-Domain Subsystem: Not Supported
00:23:55.822 Fixed Capacity Management: Not Supported
00:23:55.822 Variable Capacity Management: Not Supported
00:23:55.822 Delete Endurance Group: Not Supported
00:23:55.822 Delete NVM Set: Not Supported
00:23:55.822 Extended LBA Formats Supported: Not Supported
00:23:55.822 Flexible Data Placement Supported: Not Supported
00:23:55.822 
00:23:55.822 Controller Memory Buffer Support
00:23:55.822 ================================
00:23:55.822 Supported: No
00:23:55.822 
00:23:55.822 Persistent Memory Region Support
00:23:55.822 ================================
00:23:55.822 Supported: No
00:23:55.822 
00:23:55.822 Admin Command Set Attributes
00:23:55.822 ============================
00:23:55.822 Security Send/Receive: Not Supported
00:23:55.822 Format NVM: Not Supported
00:23:55.822 Firmware Activate/Download: Not Supported
00:23:55.822 Namespace Management: Not Supported
00:23:55.822 Device Self-Test: Not Supported
00:23:55.822 Directives: Not Supported
00:23:55.822 NVMe-MI: Not Supported
00:23:55.822 Virtualization Management: Not Supported
00:23:55.822 Doorbell Buffer Config: Not Supported
00:23:55.822 Get LBA Status Capability: Not Supported
00:23:55.822 Command & Feature Lockdown Capability: Not Supported
00:23:55.822 Abort Command Limit: 1
00:23:55.822 Async Event Request Limit: 4
00:23:55.822 Number of Firmware Slots: N/A
00:23:55.822 Firmware Slot 1 Read-Only: N/A
00:23:55.822 Firmware Activation Without Reset: N/A
00:23:55.822 Multiple Update Detection Support: N/A
00:23:55.822 Firmware Update Granularity: No Information Provided
00:23:55.822 Per-Namespace SMART Log: No
00:23:55.822 Asymmetric Namespace Access Log Page: Not Supported
00:23:55.822 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:23:55.822 Command Effects Log Page: Not Supported
00:23:55.822 Get Log Page Extended Data: Supported
00:23:55.822 Telemetry Log Pages: Not Supported
00:23:55.822 Persistent Event Log Pages: Not Supported
00:23:55.822 Supported Log Pages Log Page: May Support
00:23:55.822 Commands Supported & Effects Log Page: Not Supported
00:23:55.822 Feature Identifiers & Effects Log Page:May Support
00:23:55.822 NVMe-MI Commands & Effects Log Page: May Support
00:23:55.822 Data Area 4 for Telemetry Log: Not Supported
00:23:55.822 Error Log Page Entries Supported: 128
00:23:55.822 Keep Alive: Not Supported
00:23:55.822 
00:23:55.822 NVM Command Set Attributes
00:23:55.822 ==========================
00:23:55.822 Submission Queue Entry Size
00:23:55.822 Max: 1
00:23:55.822 Min: 1
00:23:55.822 Completion Queue Entry Size
00:23:55.822 Max: 1
00:23:55.822 Min: 1
00:23:55.822 Number of Namespaces: 0
00:23:55.822 Compare Command: Not Supported
00:23:55.822 Write Uncorrectable Command: Not Supported
00:23:55.822 Dataset Management Command: Not Supported
00:23:55.822 Write Zeroes Command: Not Supported
00:23:55.822 Set Features Save Field: Not Supported
00:23:55.822 Reservations: Not Supported
00:23:55.822 Timestamp: Not Supported
00:23:55.822 Copy: Not Supported
00:23:55.822 Volatile Write Cache: Not Present
00:23:55.822 Atomic Write Unit (Normal): 1
00:23:55.822 Atomic Write Unit (PFail): 1
00:23:55.822 Atomic Compare & Write Unit: 1
00:23:55.822 Fused Compare & Write: Supported
00:23:55.822 Scatter-Gather List
00:23:55.822 SGL Command Set: Supported
00:23:55.822 SGL Keyed: Supported
00:23:55.822 SGL Bit Bucket Descriptor: Not Supported
00:23:55.822 SGL Metadata Pointer: Not Supported
00:23:55.822 Oversized SGL: Not Supported
00:23:55.822 SGL Metadata Address: Not Supported
00:23:55.822 SGL Offset: Supported
00:23:55.822 Transport SGL Data Block: Not Supported
00:23:55.822 Replay Protected Memory Block: Not Supported
00:23:55.822 Firmware Slot Information
00:23:55.822 =========================
00:23:55.822 Active slot: 0
00:23:55.822 
00:23:55.822 
00:23:55.822 Error Log
00:23:55.822 =========
00:23:55.822 
00:23:55.822 Active Namespaces
00:23:55.822 =================
00:23:55.822 Discovery Log Page
00:23:55.822 ==================
00:23:55.822 Generation Counter: 2
00:23:55.822 Number of Records: 2
00:23:55.822 Record Format: 0
00:23:55.822 
00:23:55.822 Discovery Log Entry 0
00:23:55.822 ----------------------
00:23:55.822 Transport Type: 3 (TCP)
00:23:55.822 Address Family: 1 (IPv4)
00:23:55.822 Subsystem Type: 3 (Current Discovery Subsystem)
00:23:55.822 Entry Flags:
00:23:55.822 Duplicate Returned Information: 1
00:23:55.822 Explicit Persistent Connection Support for Discovery: 1
00:23:55.822 Transport Requirements:
00:23:55.822 Secure Channel: Not Required
00:23:55.822 Port ID: 0 (0x0000)
00:23:55.822 Controller ID: 65535 (0xffff)
00:23:55.822 Admin Max SQ Size: 128
00:23:55.822 Transport Service Identifier: 4420
00:23:55.822 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:23:55.822 Transport Address: 10.0.0.2
00:23:55.822 Discovery Log Entry 1
00:23:55.822 ----------------------
00:23:55.822 Transport Type: 3 (TCP)
00:23:55.822 Address Family: 1 (IPv4)
00:23:55.822 Subsystem Type: 2 (NVM Subsystem)
00:23:55.822 Entry Flags:
00:23:55.822 Duplicate Returned Information: 0
00:23:55.822 Explicit Persistent Connection Support for Discovery: 0
00:23:55.822 Transport Requirements:
00:23:55.822 Secure Channel: Not Required
00:23:55.822 Port ID: 0 (0x0000)
00:23:55.822 Controller ID: 65535 (0xffff)
00:23:55.822 Admin Max SQ Size: 128
00:23:55.822 Transport Service Identifier: 4420
00:23:55.822 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:23:55.822 Transport Address: 10.0.0.2
00:23:55.822 [2024-06-10 12:12:45.133768] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:23:55.822 [2024-06-10 12:12:45.133782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:55.822 [2024-06-10 12:12:45.133790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:55.822 [2024-06-10 12:12:45.133797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:55.822 [2024-06-10 12:12:45.133804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:55.822 [2024-06-10 12:12:45.133814] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.822 [2024-06-10 12:12:45.133819] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.822 [2024-06-10 12:12:45.133823] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x179ef00)
00:23:55.822 [2024-06-10 12:12:45.133831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.822 [2024-06-10 12:12:45.133846] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x180a210, cid 3, qid 0
00:23:55.823 [2024-06-10 12:12:45.133911] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.823 [2024-06-10 12:12:45.133918] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.823 [2024-06-10 12:12:45.133922] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.133927] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x180a210) on tqpair=0x179ef00
00:23:55.823 [2024-06-10 12:12:45.133938] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.133943] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.133948] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x179ef00)
00:23:55.823 [2024-06-10 12:12:45.133955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.823 [2024-06-10 12:12:45.133970] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x180a210, cid 3, qid 0
00:23:55.823 [2024-06-10 12:12:45.134044] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.823 [2024-06-10 12:12:45.134051] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.823 [2024-06-10 12:12:45.134056] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.134060] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x180a210) on tqpair=0x179ef00
00:23:55.823 [2024-06-10 12:12:45.134067] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us
00:23:55.823 [2024-06-10 12:12:45.134073] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms
00:23:55.823 [2024-06-10 12:12:45.134085] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.134090] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.134094] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x179ef00)
00:23:55.823 [2024-06-10 12:12:45.134101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.823 [2024-06-10 12:12:45.134113] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x180a210, cid 3, qid 0
00:23:55.823 [2024-06-10 12:12:45.134179] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.823 [2024-06-10 12:12:45.134185] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.823 [2024-06-10 12:12:45.134190] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.134194] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x180a210) on tqpair=0x179ef00
00:23:55.823 [2024-06-10 12:12:45.134206] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.134211] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.134215] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x179ef00)
00:23:55.823 [2024-06-10 12:12:45.134222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.823 [2024-06-10 12:12:45.134234] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x180a210, cid 3, qid 0
00:23:55.823 [2024-06-10 12:12:45.134299] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.823 [2024-06-10 12:12:45.134306] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.823 [2024-06-10 12:12:45.134310] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.134315] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x180a210) on tqpair=0x179ef00
00:23:55.823 [2024-06-10 12:12:45.134325] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.134330] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.134335] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x179ef00)
00:23:55.823 [2024-06-10 12:12:45.134342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.823 [2024-06-10 12:12:45.134353] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x180a210, cid 3, qid 0
00:23:55.823 [2024-06-10 12:12:45.134418] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.823 [2024-06-10 12:12:45.134425] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.823 [2024-06-10 12:12:45.134429] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.134434] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x180a210) on tqpair=0x179ef00
00:23:55.823 [2024-06-10 12:12:45.134445] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.134449] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.134454] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x179ef00)
00:23:55.823 [2024-06-10 12:12:45.134461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.823 [2024-06-10 12:12:45.134472] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x180a210, cid 3, qid 0
00:23:55.823 [2024-06-10 12:12:45.134541] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.823 [2024-06-10 12:12:45.134548] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.823 [2024-06-10 12:12:45.134552] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.134557] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x180a210) on tqpair=0x179ef00
00:23:55.823 [2024-06-10 12:12:45.134568] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.134577] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.134581] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x179ef00)
00:23:55.823 [2024-06-10 12:12:45.134588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.823 [2024-06-10 12:12:45.134600] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x180a210, cid 3, qid 0
00:23:55.823 [2024-06-10 12:12:45.134669] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.823 [2024-06-10 12:12:45.134675] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.823 [2024-06-10 12:12:45.134680] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.134684] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x180a210) on tqpair=0x179ef00
00:23:55.823 [2024-06-10 12:12:45.134695] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.134700] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.134705] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x179ef00)
00:23:55.823 [2024-06-10 12:12:45.134711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.823 [2024-06-10 12:12:45.134723] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x180a210, cid 3, qid 0
00:23:55.823 [2024-06-10 12:12:45.134791] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.823 [2024-06-10 12:12:45.134798] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.823 [2024-06-10 12:12:45.134802] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.134807] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x180a210) on tqpair=0x179ef00
00:23:55.823 [2024-06-10 12:12:45.134818] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.134823] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.134827] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x179ef00)
00:23:55.823 [2024-06-10 12:12:45.134834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.823 [2024-06-10 12:12:45.134845] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x180a210, cid 3, qid 0
00:23:55.823 [2024-06-10 12:12:45.134908] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.823 [2024-06-10 12:12:45.134914] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.823 [2024-06-10 12:12:45.134919] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.134924] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x180a210) on tqpair=0x179ef00
00:23:55.823 [2024-06-10 12:12:45.134934] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.134939] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.134944] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x179ef00)
00:23:55.823 [2024-06-10 12:12:45.134950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.823 [2024-06-10 12:12:45.134963] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x180a210, cid 3, qid 0
00:23:55.823 [2024-06-10 12:12:45.135029] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.823 [2024-06-10 12:12:45.135036] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.823 [2024-06-10 12:12:45.135041] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.135045] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x180a210) on tqpair=0x179ef00
00:23:55.823 [2024-06-10 12:12:45.135056] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.135061] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.135067] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x179ef00)
00:23:55.823 [2024-06-10 12:12:45.135074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.823 [2024-06-10 12:12:45.135085] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x180a210, cid 3, qid 0
00:23:55.823 [2024-06-10 12:12:45.135148] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.823 [2024-06-10 12:12:45.135155] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.823 [2024-06-10 12:12:45.135159] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.135164] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x180a210) on tqpair=0x179ef00
00:23:55.823 [2024-06-10 12:12:45.135175] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.135179] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.135184] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x179ef00)
00:23:55.823 [2024-06-10 12:12:45.135191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.823 [2024-06-10 12:12:45.135202] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x180a210, cid 3, qid 0
00:23:55.823 [2024-06-10 12:12:45.135268] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.823 [2024-06-10 12:12:45.135275] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.823 [2024-06-10 12:12:45.135279] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.135284] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x180a210) on tqpair=0x179ef00
00:23:55.823 [2024-06-10 12:12:45.135295] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.823 [2024-06-10 12:12:45.135300] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.824 [2024-06-10 12:12:45.135305] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x179ef00)
00:23:55.824 [2024-06-10 12:12:45.135312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.824 [2024-06-10 12:12:45.135322] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x180a210, cid 3, qid 0
00:23:55.824 [2024-06-10 12:12:45.135388] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.824 [2024-06-10 12:12:45.135394] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.824 [2024-06-10 12:12:45.135399] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.824 [2024-06-10 12:12:45.135404] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x180a210) on tqpair=0x179ef00
00:23:55.824 [2024-06-10 12:12:45.135414] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.824 [2024-06-10 12:12:45.135419] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.824 [2024-06-10 12:12:45.135424] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x179ef00)
00:23:55.824 [2024-06-10 12:12:45.135430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.824 [2024-06-10 12:12:45.135441] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x180a210, cid 3, qid 0
00:23:55.824 [2024-06-10 12:12:45.135512] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.824 [2024-06-10 12:12:45.135519] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.824 [2024-06-10 12:12:45.135524] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.824 [2024-06-10 12:12:45.135528] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x180a210) on tqpair=0x179ef00
00:23:55.824 [2024-06-10 12:12:45.135539] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.824 [2024-06-10 12:12:45.135544] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.824 [2024-06-10 12:12:45.135550] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x179ef00)
00:23:55.824 [2024-06-10 12:12:45.135557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.824 [2024-06-10 12:12:45.135569] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x180a210, cid 3, qid 0
00:23:55.824 [2024-06-10 12:12:45.135634] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.824 [2024-06-10 12:12:45.135640] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.824 [2024-06-10 12:12:45.135645] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.824 [2024-06-10 12:12:45.135649] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x180a210) on tqpair=0x179ef00
00:23:55.824 [2024-06-10 12:12:45.135660] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.824 [2024-06-10 12:12:45.135665] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.824 [2024-06-10 12:12:45.135669] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x179ef00)
00:23:55.824 [2024-06-10 12:12:45.135676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.824 [2024-06-10 12:12:45.135687] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x180a210, cid 3, qid 0
00:23:55.824 [2024-06-10 12:12:45.135753] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.824 [2024-06-10 12:12:45.135759] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.824 [2024-06-10 12:12:45.135764] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.824 [2024-06-10 12:12:45.135769] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x180a210) on tqpair=0x179ef00
00:23:55.824 [2024-06-10 12:12:45.135779] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.824 [2024-06-10 12:12:45.135784] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.824 [2024-06-10 12:12:45.135789] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x179ef00)
00:23:55.824 [2024-06-10 12:12:45.135796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.824 [2024-06-10 12:12:45.135807] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x180a210, cid 3, qid 0
00:23:55.824 [2024-06-10 12:12:45.135874] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.824 [2024-06-10 12:12:45.135881] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.824 [2024-06-10 12:12:45.135885] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.824 [2024-06-10 12:12:45.135890]
nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x180a210) on tqpair=0x179ef00 00:23:55.824 [2024-06-10 12:12:45.135901] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.824 [2024-06-10 12:12:45.135906] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.824 [2024-06-10 12:12:45.135910] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x179ef00) 00:23:55.824 [2024-06-10 12:12:45.135917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.824 [2024-06-10 12:12:45.135928] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x180a210, cid 3, qid 0 00:23:55.824 [2024-06-10 12:12:45.135993] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.824 [2024-06-10 12:12:45.136000] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.824 [2024-06-10 12:12:45.136004] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.824 [2024-06-10 12:12:45.136009] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x180a210) on tqpair=0x179ef00 00:23:55.824 [2024-06-10 12:12:45.136020] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.824 [2024-06-10 12:12:45.136025] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.824 [2024-06-10 12:12:45.136029] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x179ef00) 00:23:55.824 [2024-06-10 12:12:45.136038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.824 [2024-06-10 12:12:45.136049] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x180a210, cid 3, qid 0 00:23:55.824 [2024-06-10 12:12:45.136117] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.824 
[2024-06-10 12:12:45.136123] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.824 [2024-06-10 12:12:45.136128] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.824 [2024-06-10 12:12:45.136132] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x180a210) on tqpair=0x179ef00 00:23:55.824 [2024-06-10 12:12:45.136143] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.824 [2024-06-10 12:12:45.136148] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.824 [2024-06-10 12:12:45.136153] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x179ef00) 00:23:55.824 [2024-06-10 12:12:45.136159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.824 [2024-06-10 12:12:45.136170] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x180a210, cid 3, qid 0 00:23:55.824 [2024-06-10 12:12:45.136236] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.824 [2024-06-10 12:12:45.136242] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.824 [2024-06-10 12:12:45.136247] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.824 [2024-06-10 12:12:45.136252] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x180a210) on tqpair=0x179ef00 00:23:55.824 [2024-06-10 12:12:45.136262] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.824 [2024-06-10 12:12:45.136267] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.824 [2024-06-10 12:12:45.136272] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x179ef00) 00:23:55.824 [2024-06-10 12:12:45.136279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:55.824 [2024-06-10 12:12:45.136290] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x180a210, cid 3, qid 0 00:23:55.824 [2024-06-10 12:12:45.136389] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.824 [2024-06-10 12:12:45.136396] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.824 [2024-06-10 12:12:45.136400] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.824 [2024-06-10 12:12:45.136405] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x180a210) on tqpair=0x179ef00 00:23:55.824 [2024-06-10 12:12:45.136416] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.824 [2024-06-10 12:12:45.136421] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.824 [2024-06-10 12:12:45.136426] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x179ef00) 00:23:55.824 [2024-06-10 12:12:45.136433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.824 [2024-06-10 12:12:45.136444] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x180a210, cid 3, qid 0 00:23:55.824 [2024-06-10 12:12:45.142492] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.824 [2024-06-10 12:12:45.142504] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.824 [2024-06-10 12:12:45.142509] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.824 [2024-06-10 12:12:45.142514] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x180a210) on tqpair=0x179ef00 00:23:55.824 [2024-06-10 12:12:45.142528] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.824 [2024-06-10 12:12:45.142533] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.824 [2024-06-10 12:12:45.142537] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x179ef00) 00:23:55.824 [2024-06-10 12:12:45.142545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.824 [2024-06-10 12:12:45.142561] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x180a210, cid 3, qid 0 00:23:55.824 [2024-06-10 12:12:45.142763] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.824 [2024-06-10 12:12:45.142770] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.824 [2024-06-10 12:12:45.142774] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.824 [2024-06-10 12:12:45.142779] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x180a210) on tqpair=0x179ef00 00:23:55.824 [2024-06-10 12:12:45.142788] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 8 milliseconds 00:23:55.824 00:23:55.824 12:12:45 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:55.824 [2024-06-10 12:12:45.182584] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:23:55.824 [2024-06-10 12:12:45.182624] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2302698 ] 00:23:55.824 EAL: No free 2048 kB hugepages reported on node 1 00:23:55.825 [2024-06-10 12:12:45.213596] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:55.825 [2024-06-10 12:12:45.213643] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:55.825 [2024-06-10 12:12:45.213649] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:55.825 [2024-06-10 12:12:45.213661] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:55.825 [2024-06-10 12:12:45.213671] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:55.825 [2024-06-10 12:12:45.213901] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:55.825 [2024-06-10 12:12:45.213929] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1bbaf00 0 00:23:55.825 [2024-06-10 12:12:45.228493] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:55.825 [2024-06-10 12:12:45.228507] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:55.825 [2024-06-10 12:12:45.228513] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:55.825 [2024-06-10 12:12:45.228517] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:55.825 [2024-06-10 12:12:45.228553] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.825 [2024-06-10 12:12:45.228559] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.825 [2024-06-10 
12:12:45.228563] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bbaf00) 00:23:55.825 [2024-06-10 12:12:45.228575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:55.825 [2024-06-10 12:12:45.228591] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c25df0, cid 0, qid 0 00:23:55.825 [2024-06-10 12:12:45.236492] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.825 [2024-06-10 12:12:45.236503] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.825 [2024-06-10 12:12:45.236508] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.825 [2024-06-10 12:12:45.236513] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c25df0) on tqpair=0x1bbaf00 00:23:55.825 [2024-06-10 12:12:45.236527] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:55.825 [2024-06-10 12:12:45.236536] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:55.825 [2024-06-10 12:12:45.236543] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:55.825 [2024-06-10 12:12:45.236556] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.825 [2024-06-10 12:12:45.236561] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.825 [2024-06-10 12:12:45.236566] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bbaf00) 00:23:55.825 [2024-06-10 12:12:45.236574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.825 [2024-06-10 12:12:45.236588] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c25df0, cid 0, qid 0 00:23:55.825 
[2024-06-10 12:12:45.236671] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.825 [2024-06-10 12:12:45.236678] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.825 [2024-06-10 12:12:45.236682] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.825 [2024-06-10 12:12:45.236687] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c25df0) on tqpair=0x1bbaf00 00:23:55.825 [2024-06-10 12:12:45.236694] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:55.825 [2024-06-10 12:12:45.236703] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:55.825 [2024-06-10 12:12:45.236710] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.825 [2024-06-10 12:12:45.236715] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.825 [2024-06-10 12:12:45.236719] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bbaf00) 00:23:55.825 [2024-06-10 12:12:45.236726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.825 [2024-06-10 12:12:45.236738] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c25df0, cid 0, qid 0 00:23:55.825 [2024-06-10 12:12:45.236824] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.825 [2024-06-10 12:12:45.236831] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.825 [2024-06-10 12:12:45.236835] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.825 [2024-06-10 12:12:45.236840] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c25df0) on tqpair=0x1bbaf00 00:23:55.825 [2024-06-10 12:12:45.236846] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:55.825 [2024-06-10 12:12:45.236855] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:55.825 [2024-06-10 12:12:45.236863] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.825 [2024-06-10 12:12:45.236867] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.825 [2024-06-10 12:12:45.236872] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bbaf00) 00:23:55.825 [2024-06-10 12:12:45.236879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.825 [2024-06-10 12:12:45.236890] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c25df0, cid 0, qid 0 00:23:55.825 [2024-06-10 12:12:45.236959] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.825 [2024-06-10 12:12:45.236966] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.825 [2024-06-10 12:12:45.236970] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.825 [2024-06-10 12:12:45.236975] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c25df0) on tqpair=0x1bbaf00 00:23:55.825 [2024-06-10 12:12:45.236982] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:55.825 [2024-06-10 12:12:45.236994] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.825 [2024-06-10 12:12:45.236999] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.825 [2024-06-10 12:12:45.237003] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bbaf00) 00:23:55.825 [2024-06-10 12:12:45.237010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.825 [2024-06-10 12:12:45.237021] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c25df0, cid 0, qid 0 00:23:55.825 [2024-06-10 12:12:45.237089] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.825 [2024-06-10 12:12:45.237096] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.825 [2024-06-10 12:12:45.237100] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.825 [2024-06-10 12:12:45.237106] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c25df0) on tqpair=0x1bbaf00 00:23:55.825 [2024-06-10 12:12:45.237113] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:55.825 [2024-06-10 12:12:45.237119] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:55.825 [2024-06-10 12:12:45.237128] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:55.825 [2024-06-10 12:12:45.237234] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:55.825 [2024-06-10 12:12:45.237239] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:55.825 [2024-06-10 12:12:45.237247] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.825 [2024-06-10 12:12:45.237252] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.825 [2024-06-10 12:12:45.237257] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bbaf00) 00:23:55.825 [2024-06-10 12:12:45.237263] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.825 [2024-06-10 12:12:45.237275] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c25df0, cid 0, qid 0 00:23:55.825 [2024-06-10 12:12:45.237349] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.825 [2024-06-10 12:12:45.237356] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.825 [2024-06-10 12:12:45.237360] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.825 [2024-06-10 12:12:45.237365] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c25df0) on tqpair=0x1bbaf00 00:23:55.825 [2024-06-10 12:12:45.237371] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:55.825 [2024-06-10 12:12:45.237382] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.825 [2024-06-10 12:12:45.237387] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.825 [2024-06-10 12:12:45.237391] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bbaf00) 00:23:55.825 [2024-06-10 12:12:45.237398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.825 [2024-06-10 12:12:45.237410] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c25df0, cid 0, qid 0 00:23:55.825 [2024-06-10 12:12:45.237490] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.825 [2024-06-10 12:12:45.237497] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.825 [2024-06-10 12:12:45.237501] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.825 [2024-06-10 12:12:45.237506] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c25df0) on 
tqpair=0x1bbaf00 00:23:55.825 [2024-06-10 12:12:45.237512] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:55.825 [2024-06-10 12:12:45.237520] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:55.825 [2024-06-10 12:12:45.237530] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:55.825 [2024-06-10 12:12:45.237543] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:55.825 [2024-06-10 12:12:45.237551] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.825 [2024-06-10 12:12:45.237556] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bbaf00) 00:23:55.825 [2024-06-10 12:12:45.237563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.825 [2024-06-10 12:12:45.237575] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c25df0, cid 0, qid 0 00:23:55.825 [2024-06-10 12:12:45.237688] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.825 [2024-06-10 12:12:45.237695] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.825 [2024-06-10 12:12:45.237700] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.826 [2024-06-10 12:12:45.237704] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bbaf00): datao=0, datal=4096, cccid=0 00:23:55.826 [2024-06-10 12:12:45.237710] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c25df0) on tqpair(0x1bbaf00): expected_datao=0, payload_size=4096 00:23:55.826 
[2024-06-10 12:12:45.237716] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.826 [2024-06-10 12:12:45.237723] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.826 [2024-06-10 12:12:45.237728] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.826 [2024-06-10 12:12:45.237749] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.826 [2024-06-10 12:12:45.237756] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.826 [2024-06-10 12:12:45.237760] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.826 [2024-06-10 12:12:45.237765] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c25df0) on tqpair=0x1bbaf00 00:23:55.826 [2024-06-10 12:12:45.237774] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:55.826 [2024-06-10 12:12:45.237780] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:55.826 [2024-06-10 12:12:45.237786] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:55.826 [2024-06-10 12:12:45.237794] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:55.826 [2024-06-10 12:12:45.237799] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:55.826 [2024-06-10 12:12:45.237805] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:55.826 [2024-06-10 12:12:45.237815] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:55.826 [2024-06-10 12:12:45.237823] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.826 [2024-06-10 
12:12:45.237828] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.826 [2024-06-10 12:12:45.237832] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bbaf00) 00:23:55.826 [2024-06-10 12:12:45.237840] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:55.826 [2024-06-10 12:12:45.237851] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c25df0, cid 0, qid 0 00:23:55.826 [2024-06-10 12:12:45.237921] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.826 [2024-06-10 12:12:45.237929] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.826 [2024-06-10 12:12:45.237934] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.826 [2024-06-10 12:12:45.237939] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c25df0) on tqpair=0x1bbaf00 00:23:55.826 [2024-06-10 12:12:45.237946] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.826 [2024-06-10 12:12:45.237951] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.826 [2024-06-10 12:12:45.237956] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bbaf00) 00:23:55.826 [2024-06-10 12:12:45.237962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.826 [2024-06-10 12:12:45.237969] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.826 [2024-06-10 12:12:45.237973] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.826 [2024-06-10 12:12:45.237978] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1bbaf00) 00:23:55.826 [2024-06-10 12:12:45.237984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.826 [2024-06-10 12:12:45.237991] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.826 [2024-06-10 12:12:45.237996] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.826 [2024-06-10 12:12:45.238000] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1bbaf00) 00:23:55.826 [2024-06-10 12:12:45.238006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.826 [2024-06-10 12:12:45.238013] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.826 [2024-06-10 12:12:45.238018] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.826 [2024-06-10 12:12:45.238022] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbaf00) 00:23:55.826 [2024-06-10 12:12:45.238028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.826 [2024-06-10 12:12:45.238034] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:55.826 [2024-06-10 12:12:45.238046] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:55.826 [2024-06-10 12:12:45.238053] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.826 [2024-06-10 12:12:45.238058] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bbaf00) 00:23:55.826 [2024-06-10 12:12:45.238065] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.826 [2024-06-10 12:12:45.238077] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c25df0, cid 0, qid 0 00:23:55.826 [2024-06-10 12:12:45.238083] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c25f50, cid 1, qid 0 00:23:55.826 [2024-06-10 12:12:45.238088] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c260b0, cid 2, qid 0 00:23:55.826 [2024-06-10 12:12:45.238093] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26210, cid 3, qid 0 00:23:55.826 [2024-06-10 12:12:45.238099] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26370, cid 4, qid 0 00:23:55.826 [2024-06-10 12:12:45.238193] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.826 [2024-06-10 12:12:45.238200] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.826 [2024-06-10 12:12:45.238204] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.826 [2024-06-10 12:12:45.238209] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26370) on tqpair=0x1bbaf00 00:23:55.826 [2024-06-10 12:12:45.238215] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:55.826 [2024-06-10 12:12:45.238223] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:55.826 [2024-06-10 12:12:45.238232] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:55.826 [2024-06-10 12:12:45.238239] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:55.826 [2024-06-10 12:12:45.238246] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.826 [2024-06-10 12:12:45.238250] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.826 [2024-06-10 12:12:45.238255] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bbaf00) 00:23:55.826 [2024-06-10 12:12:45.238261] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:55.826 [2024-06-10 12:12:45.238272] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26370, cid 4, qid 0 00:23:55.826 [2024-06-10 12:12:45.238338] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.826 [2024-06-10 12:12:45.238345] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.826 [2024-06-10 12:12:45.238349] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.826 [2024-06-10 12:12:45.238354] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26370) on tqpair=0x1bbaf00 00:23:55.826 [2024-06-10 12:12:45.238398] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:55.826 [2024-06-10 12:12:45.238409] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:55.826 [2024-06-10 12:12:45.238417] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.826 [2024-06-10 12:12:45.238422] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bbaf00) 00:23:55.827 [2024-06-10 12:12:45.238428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.827 [2024-06-10 12:12:45.238440] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26370, cid 4, qid 0 00:23:55.827 [2024-06-10 12:12:45.238523] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.827 [2024-06-10 12:12:45.238530] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.827 [2024-06-10 12:12:45.238535] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.827 [2024-06-10 12:12:45.238539] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bbaf00): datao=0, datal=4096, cccid=4 00:23:55.827 [2024-06-10 12:12:45.238545] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c26370) on tqpair(0x1bbaf00): expected_datao=0, payload_size=4096 00:23:55.827 [2024-06-10 12:12:45.238551] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.827 [2024-06-10 12:12:45.238566] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.827 [2024-06-10 12:12:45.238571] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.827 [2024-06-10 12:12:45.238630] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.827 [2024-06-10 12:12:45.238637] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.827 [2024-06-10 12:12:45.238641] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.827 [2024-06-10 12:12:45.238646] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26370) on tqpair=0x1bbaf00 00:23:55.827 [2024-06-10 12:12:45.238656] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:55.827 [2024-06-10 12:12:45.238668] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:55.827 [2024-06-10 12:12:45.238678] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:55.827 [2024-06-10 12:12:45.238688] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.827 
[2024-06-10 12:12:45.238693] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bbaf00) 00:23:55.827 [2024-06-10 12:12:45.238700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.827 [2024-06-10 12:12:45.238712] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26370, cid 4, qid 0 00:23:55.827 [2024-06-10 12:12:45.238787] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.827 [2024-06-10 12:12:45.238794] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.827 [2024-06-10 12:12:45.238798] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.827 [2024-06-10 12:12:45.238803] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bbaf00): datao=0, datal=4096, cccid=4 00:23:55.827 [2024-06-10 12:12:45.238809] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c26370) on tqpair(0x1bbaf00): expected_datao=0, payload_size=4096 00:23:55.827 [2024-06-10 12:12:45.238814] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.827 [2024-06-10 12:12:45.238826] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.827 [2024-06-10 12:12:45.238830] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.827 [2024-06-10 12:12:45.238870] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.827 [2024-06-10 12:12:45.238877] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.827 [2024-06-10 12:12:45.238881] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.827 [2024-06-10 12:12:45.238886] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26370) on tqpair=0x1bbaf00 00:23:55.827 [2024-06-10 12:12:45.238899] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:55.827 [2024-06-10 12:12:45.238909] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:55.827 [2024-06-10 12:12:45.238916] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.827 [2024-06-10 12:12:45.238921] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bbaf00) 00:23:55.827 [2024-06-10 12:12:45.238928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.827 [2024-06-10 12:12:45.238940] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26370, cid 4, qid 0 00:23:55.827 [2024-06-10 12:12:45.239014] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.827 [2024-06-10 12:12:45.239021] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.827 [2024-06-10 12:12:45.239025] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.827 [2024-06-10 12:12:45.239030] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bbaf00): datao=0, datal=4096, cccid=4 00:23:55.827 [2024-06-10 12:12:45.239036] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c26370) on tqpair(0x1bbaf00): expected_datao=0, payload_size=4096 00:23:55.827 [2024-06-10 12:12:45.239041] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.827 [2024-06-10 12:12:45.239053] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.827 [2024-06-10 12:12:45.239057] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.827 [2024-06-10 12:12:45.239091] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.827 [2024-06-10 12:12:45.239098] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.827 [2024-06-10 12:12:45.239102] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.827 [2024-06-10 12:12:45.239107] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26370) on tqpair=0x1bbaf00 00:23:55.827 [2024-06-10 12:12:45.239115] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:55.827 [2024-06-10 12:12:45.239135] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:55.827 [2024-06-10 12:12:45.239145] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:55.827 [2024-06-10 12:12:45.239152] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:55.827 [2024-06-10 12:12:45.239158] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:55.827 [2024-06-10 12:12:45.239164] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:55.827 [2024-06-10 12:12:45.239170] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:55.827 [2024-06-10 12:12:45.239177] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:55.827 [2024-06-10 12:12:45.239194] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.827 [2024-06-10 12:12:45.239199] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bbaf00) 00:23:55.827 [2024-06-10 
12:12:45.239206] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.827 [2024-06-10 12:12:45.239213] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.827 [2024-06-10 12:12:45.239218] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.827 [2024-06-10 12:12:45.239222] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1bbaf00) 00:23:55.827 [2024-06-10 12:12:45.239229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.827 [2024-06-10 12:12:45.239243] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26370, cid 4, qid 0 00:23:55.827 [2024-06-10 12:12:45.239250] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c264d0, cid 5, qid 0 00:23:55.827 [2024-06-10 12:12:45.239336] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.827 [2024-06-10 12:12:45.239343] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.827 [2024-06-10 12:12:45.239347] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.827 [2024-06-10 12:12:45.239352] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26370) on tqpair=0x1bbaf00 00:23:55.827 [2024-06-10 12:12:45.239360] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.827 [2024-06-10 12:12:45.239366] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.827 [2024-06-10 12:12:45.239371] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.827 [2024-06-10 12:12:45.239375] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c264d0) on tqpair=0x1bbaf00 00:23:55.827 [2024-06-10 12:12:45.239387] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.827 
[2024-06-10 12:12:45.239392] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1bbaf00) 00:23:55.827 [2024-06-10 12:12:45.239398] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.827 [2024-06-10 12:12:45.239409] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c264d0, cid 5, qid 0 00:23:55.827 [2024-06-10 12:12:45.239491] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.827 [2024-06-10 12:12:45.239499] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.827 [2024-06-10 12:12:45.239503] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.827 [2024-06-10 12:12:45.239508] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c264d0) on tqpair=0x1bbaf00 00:23:55.827 [2024-06-10 12:12:45.239521] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.827 [2024-06-10 12:12:45.239525] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1bbaf00) 00:23:55.827 [2024-06-10 12:12:45.239532] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.827 [2024-06-10 12:12:45.239543] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c264d0, cid 5, qid 0 00:23:55.827 [2024-06-10 12:12:45.239618] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.827 [2024-06-10 12:12:45.239625] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.827 [2024-06-10 12:12:45.239629] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.827 [2024-06-10 12:12:45.239634] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c264d0) on tqpair=0x1bbaf00 00:23:55.827 [2024-06-10 12:12:45.239644] 
nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.827 [2024-06-10 12:12:45.239649] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1bbaf00) 00:23:55.827 [2024-06-10 12:12:45.239656] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.827 [2024-06-10 12:12:45.239667] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c264d0, cid 5, qid 0 00:23:55.827 [2024-06-10 12:12:45.239741] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.827 [2024-06-10 12:12:45.239747] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.827 [2024-06-10 12:12:45.239752] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.827 [2024-06-10 12:12:45.239756] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c264d0) on tqpair=0x1bbaf00 00:23:55.827 [2024-06-10 12:12:45.239769] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.827 [2024-06-10 12:12:45.239774] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1bbaf00) 00:23:55.828 [2024-06-10 12:12:45.239781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.828 [2024-06-10 12:12:45.239788] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.828 [2024-06-10 12:12:45.239793] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bbaf00) 00:23:55.828 [2024-06-10 12:12:45.239799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.828 [2024-06-10 12:12:45.239807] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.828 [2024-06-10 12:12:45.239812] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1bbaf00) 00:23:55.828 [2024-06-10 12:12:45.239818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.828 [2024-06-10 12:12:45.239826] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.828 [2024-06-10 12:12:45.239831] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1bbaf00) 00:23:55.828 [2024-06-10 12:12:45.239837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.828 [2024-06-10 12:12:45.239849] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c264d0, cid 5, qid 0 00:23:55.828 [2024-06-10 12:12:45.239855] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26370, cid 4, qid 0 00:23:55.828 [2024-06-10 12:12:45.239860] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26630, cid 6, qid 0 00:23:55.828 [2024-06-10 12:12:45.239865] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26790, cid 7, qid 0 00:23:55.828 [2024-06-10 12:12:45.239997] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.828 [2024-06-10 12:12:45.240006] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.828 [2024-06-10 12:12:45.240010] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.828 [2024-06-10 12:12:45.240015] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bbaf00): datao=0, datal=8192, cccid=5 00:23:55.828 [2024-06-10 12:12:45.240021] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c264d0) on 
tqpair(0x1bbaf00): expected_datao=0, payload_size=8192 00:23:55.828 [2024-06-10 12:12:45.240026] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.828 [2024-06-10 12:12:45.240065] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.828 [2024-06-10 12:12:45.240070] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.828 [2024-06-10 12:12:45.240076] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.828 [2024-06-10 12:12:45.240082] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.828 [2024-06-10 12:12:45.240086] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.828 [2024-06-10 12:12:45.240091] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bbaf00): datao=0, datal=512, cccid=4 00:23:55.828 [2024-06-10 12:12:45.240096] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c26370) on tqpair(0x1bbaf00): expected_datao=0, payload_size=512 00:23:55.828 [2024-06-10 12:12:45.240102] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.828 [2024-06-10 12:12:45.240109] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.828 [2024-06-10 12:12:45.240113] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.828 [2024-06-10 12:12:45.240119] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.828 [2024-06-10 12:12:45.240125] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.828 [2024-06-10 12:12:45.240130] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.828 [2024-06-10 12:12:45.240134] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bbaf00): datao=0, datal=512, cccid=6 00:23:55.828 [2024-06-10 12:12:45.240140] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c26630) on tqpair(0x1bbaf00): expected_datao=0, payload_size=512 00:23:55.828 
[2024-06-10 12:12:45.240145] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.828 [2024-06-10 12:12:45.240152] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.828 [2024-06-10 12:12:45.240157] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.828 [2024-06-10 12:12:45.240163] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:55.828 [2024-06-10 12:12:45.240169] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:55.828 [2024-06-10 12:12:45.240173] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:55.828 [2024-06-10 12:12:45.240178] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bbaf00): datao=0, datal=4096, cccid=7 00:23:55.828 [2024-06-10 12:12:45.240183] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c26790) on tqpair(0x1bbaf00): expected_datao=0, payload_size=4096 00:23:55.828 [2024-06-10 12:12:45.240189] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.828 [2024-06-10 12:12:45.240196] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:55.828 [2024-06-10 12:12:45.240200] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:55.828 [2024-06-10 12:12:45.240209] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.828 [2024-06-10 12:12:45.240215] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.828 [2024-06-10 12:12:45.240219] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.828 [2024-06-10 12:12:45.240224] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c264d0) on tqpair=0x1bbaf00 00:23:55.828 [2024-06-10 12:12:45.240238] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.828 [2024-06-10 12:12:45.240245] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.828 [2024-06-10 12:12:45.240249] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.828 [2024-06-10 12:12:45.240255] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26370) on tqpair=0x1bbaf00 00:23:55.828 [2024-06-10 12:12:45.240265] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.828 [2024-06-10 12:12:45.240272] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.828 [2024-06-10 12:12:45.240276] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.828 [2024-06-10 12:12:45.240281] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26630) on tqpair=0x1bbaf00 00:23:55.828 [2024-06-10 12:12:45.240291] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.828 [2024-06-10 12:12:45.240298] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.828 [2024-06-10 12:12:45.240302] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.828 [2024-06-10 12:12:45.240307] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26790) on tqpair=0x1bbaf00 00:23:55.828 ===================================================== 00:23:55.828 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:55.828 ===================================================== 00:23:55.828 Controller Capabilities/Features 00:23:55.828 ================================ 00:23:55.828 Vendor ID: 8086 00:23:55.828 Subsystem Vendor ID: 8086 00:23:55.828 Serial Number: SPDK00000000000001 00:23:55.828 Model Number: SPDK bdev Controller 00:23:55.828 Firmware Version: 24.09 00:23:55.828 Recommended Arb Burst: 6 00:23:55.828 IEEE OUI Identifier: e4 d2 5c 00:23:55.828 Multi-path I/O 00:23:55.828 May have multiple subsystem ports: Yes 00:23:55.828 May have multiple controllers: Yes 00:23:55.828 Associated with SR-IOV VF: No 00:23:55.828 Max Data Transfer Size: 131072 00:23:55.828 Max Number of Namespaces: 32 
00:23:55.828 Max Number of I/O Queues: 127 00:23:55.828 NVMe Specification Version (VS): 1.3 00:23:55.828 NVMe Specification Version (Identify): 1.3 00:23:55.828 Maximum Queue Entries: 128 00:23:55.828 Contiguous Queues Required: Yes 00:23:55.828 Arbitration Mechanisms Supported 00:23:55.828 Weighted Round Robin: Not Supported 00:23:55.828 Vendor Specific: Not Supported 00:23:55.828 Reset Timeout: 15000 ms 00:23:55.828 Doorbell Stride: 4 bytes 00:23:55.828 NVM Subsystem Reset: Not Supported 00:23:55.828 Command Sets Supported 00:23:55.828 NVM Command Set: Supported 00:23:55.828 Boot Partition: Not Supported 00:23:55.828 Memory Page Size Minimum: 4096 bytes 00:23:55.828 Memory Page Size Maximum: 4096 bytes 00:23:55.828 Persistent Memory Region: Not Supported 00:23:55.828 Optional Asynchronous Events Supported 00:23:55.828 Namespace Attribute Notices: Supported 00:23:55.828 Firmware Activation Notices: Not Supported 00:23:55.828 ANA Change Notices: Not Supported 00:23:55.828 PLE Aggregate Log Change Notices: Not Supported 00:23:55.828 LBA Status Info Alert Notices: Not Supported 00:23:55.828 EGE Aggregate Log Change Notices: Not Supported 00:23:55.828 Normal NVM Subsystem Shutdown event: Not Supported 00:23:55.828 Zone Descriptor Change Notices: Not Supported 00:23:55.828 Discovery Log Change Notices: Not Supported 00:23:55.828 Controller Attributes 00:23:55.828 128-bit Host Identifier: Supported 00:23:55.828 Non-Operational Permissive Mode: Not Supported 00:23:55.828 NVM Sets: Not Supported 00:23:55.828 Read Recovery Levels: Not Supported 00:23:55.828 Endurance Groups: Not Supported 00:23:55.828 Predictable Latency Mode: Not Supported 00:23:55.828 Traffic Based Keep ALive: Not Supported 00:23:55.828 Namespace Granularity: Not Supported 00:23:55.828 SQ Associations: Not Supported 00:23:55.828 UUID List: Not Supported 00:23:55.828 Multi-Domain Subsystem: Not Supported 00:23:55.828 Fixed Capacity Management: Not Supported 00:23:55.828 Variable Capacity Management: Not 
Supported 00:23:55.828 Delete Endurance Group: Not Supported 00:23:55.828 Delete NVM Set: Not Supported 00:23:55.828 Extended LBA Formats Supported: Not Supported 00:23:55.828 Flexible Data Placement Supported: Not Supported 00:23:55.828 00:23:55.828 Controller Memory Buffer Support 00:23:55.828 ================================ 00:23:55.828 Supported: No 00:23:55.828 00:23:55.828 Persistent Memory Region Support 00:23:55.828 ================================ 00:23:55.828 Supported: No 00:23:55.828 00:23:55.828 Admin Command Set Attributes 00:23:55.828 ============================ 00:23:55.828 Security Send/Receive: Not Supported 00:23:55.829 Format NVM: Not Supported 00:23:55.829 Firmware Activate/Download: Not Supported 00:23:55.829 Namespace Management: Not Supported 00:23:55.829 Device Self-Test: Not Supported 00:23:55.829 Directives: Not Supported 00:23:55.829 NVMe-MI: Not Supported 00:23:55.829 Virtualization Management: Not Supported 00:23:55.829 Doorbell Buffer Config: Not Supported 00:23:55.829 Get LBA Status Capability: Not Supported 00:23:55.829 Command & Feature Lockdown Capability: Not Supported 00:23:55.829 Abort Command Limit: 4 00:23:55.829 Async Event Request Limit: 4 00:23:55.829 Number of Firmware Slots: N/A 00:23:55.829 Firmware Slot 1 Read-Only: N/A 00:23:55.829 Firmware Activation Without Reset: N/A 00:23:55.829 Multiple Update Detection Support: N/A 00:23:55.829 Firmware Update Granularity: No Information Provided 00:23:55.829 Per-Namespace SMART Log: No 00:23:55.829 Asymmetric Namespace Access Log Page: Not Supported 00:23:55.829 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:55.829 Command Effects Log Page: Supported 00:23:55.829 Get Log Page Extended Data: Supported 00:23:55.829 Telemetry Log Pages: Not Supported 00:23:55.829 Persistent Event Log Pages: Not Supported 00:23:55.829 Supported Log Pages Log Page: May Support 00:23:55.829 Commands Supported & Effects Log Page: Not Supported 00:23:55.829 Feature Identifiers & Effects Log Page:May 
Support 00:23:55.829 NVMe-MI Commands & Effects Log Page: May Support 00:23:55.829 Data Area 4 for Telemetry Log: Not Supported 00:23:55.829 Error Log Page Entries Supported: 128 00:23:55.829 Keep Alive: Supported 00:23:55.829 Keep Alive Granularity: 10000 ms 00:23:55.829 00:23:55.829 NVM Command Set Attributes 00:23:55.829 ========================== 00:23:55.829 Submission Queue Entry Size 00:23:55.829 Max: 64 00:23:55.829 Min: 64 00:23:55.829 Completion Queue Entry Size 00:23:55.829 Max: 16 00:23:55.829 Min: 16 00:23:55.829 Number of Namespaces: 32 00:23:55.829 Compare Command: Supported 00:23:55.829 Write Uncorrectable Command: Not Supported 00:23:55.829 Dataset Management Command: Supported 00:23:55.829 Write Zeroes Command: Supported 00:23:55.829 Set Features Save Field: Not Supported 00:23:55.829 Reservations: Supported 00:23:55.829 Timestamp: Not Supported 00:23:55.829 Copy: Supported 00:23:55.829 Volatile Write Cache: Present 00:23:55.829 Atomic Write Unit (Normal): 1 00:23:55.829 Atomic Write Unit (PFail): 1 00:23:55.829 Atomic Compare & Write Unit: 1 00:23:55.829 Fused Compare & Write: Supported 00:23:55.829 Scatter-Gather List 00:23:55.829 SGL Command Set: Supported 00:23:55.829 SGL Keyed: Supported 00:23:55.829 SGL Bit Bucket Descriptor: Not Supported 00:23:55.829 SGL Metadata Pointer: Not Supported 00:23:55.829 Oversized SGL: Not Supported 00:23:55.829 SGL Metadata Address: Not Supported 00:23:55.829 SGL Offset: Supported 00:23:55.829 Transport SGL Data Block: Not Supported 00:23:55.829 Replay Protected Memory Block: Not Supported 00:23:55.829 00:23:55.829 Firmware Slot Information 00:23:55.829 ========================= 00:23:55.829 Active slot: 1 00:23:55.829 Slot 1 Firmware Revision: 24.09 00:23:55.829 00:23:55.829 00:23:55.829 Commands Supported and Effects 00:23:55.829 ============================== 00:23:55.829 Admin Commands 00:23:55.829 -------------- 00:23:55.829 Get Log Page (02h): Supported 00:23:55.829 Identify (06h): Supported 00:23:55.829 
Abort (08h): Supported 00:23:55.829 Set Features (09h): Supported 00:23:55.829 Get Features (0Ah): Supported 00:23:55.829 Asynchronous Event Request (0Ch): Supported 00:23:55.829 Keep Alive (18h): Supported 00:23:55.829 I/O Commands 00:23:55.829 ------------ 00:23:55.829 Flush (00h): Supported LBA-Change 00:23:55.829 Write (01h): Supported LBA-Change 00:23:55.829 Read (02h): Supported 00:23:55.829 Compare (05h): Supported 00:23:55.829 Write Zeroes (08h): Supported LBA-Change 00:23:55.829 Dataset Management (09h): Supported LBA-Change 00:23:55.829 Copy (19h): Supported LBA-Change 00:23:55.829 Unknown (79h): Supported LBA-Change 00:23:55.829 Unknown (7Ah): Supported 00:23:55.829 00:23:55.829 Error Log 00:23:55.829 ========= 00:23:55.829 00:23:55.829 Arbitration 00:23:55.829 =========== 00:23:55.829 Arbitration Burst: 1 00:23:55.829 00:23:55.829 Power Management 00:23:55.829 ================ 00:23:55.829 Number of Power States: 1 00:23:55.829 Current Power State: Power State #0 00:23:55.829 Power State #0: 00:23:55.829 Max Power: 0.00 W 00:23:55.829 Non-Operational State: Operational 00:23:55.829 Entry Latency: Not Reported 00:23:55.829 Exit Latency: Not Reported 00:23:55.829 Relative Read Throughput: 0 00:23:55.829 Relative Read Latency: 0 00:23:55.829 Relative Write Throughput: 0 00:23:55.829 Relative Write Latency: 0 00:23:55.829 Idle Power: Not Reported 00:23:55.829 Active Power: Not Reported 00:23:55.829 Non-Operational Permissive Mode: Not Supported 00:23:55.829 00:23:55.829 Health Information 00:23:55.829 ================== 00:23:55.829 Critical Warnings: 00:23:55.829 Available Spare Space: OK 00:23:55.829 Temperature: OK 00:23:55.829 Device Reliability: OK 00:23:55.829 Read Only: No 00:23:55.829 Volatile Memory Backup: OK 00:23:55.829 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:55.829 Temperature Threshold: [2024-06-10 12:12:45.240397] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.829 [2024-06-10 12:12:45.240403] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1bbaf00) 00:23:55.829 [2024-06-10 12:12:45.240410] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.829 [2024-06-10 12:12:45.240423] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26790, cid 7, qid 0 00:23:55.829 [2024-06-10 12:12:45.244487] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.829 [2024-06-10 12:12:45.244500] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.829 [2024-06-10 12:12:45.244507] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.829 [2024-06-10 12:12:45.244515] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26790) on tqpair=0x1bbaf00 00:23:55.829 [2024-06-10 12:12:45.244549] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:55.829 [2024-06-10 12:12:45.244562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.829 [2024-06-10 12:12:45.244570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.829 [2024-06-10 12:12:45.244577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.829 [2024-06-10 12:12:45.244584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.829 [2024-06-10 12:12:45.244593] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.829 [2024-06-10 12:12:45.244598] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.829 [2024-06-10 12:12:45.244603] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbaf00) 00:23:55.829 [2024-06-10 12:12:45.244610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.829 [2024-06-10 12:12:45.244624] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26210, cid 3, qid 0 00:23:55.829 [2024-06-10 12:12:45.244712] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.829 [2024-06-10 12:12:45.244719] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.829 [2024-06-10 12:12:45.244724] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.829 [2024-06-10 12:12:45.244729] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26210) on tqpair=0x1bbaf00 00:23:55.829 [2024-06-10 12:12:45.244737] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.829 [2024-06-10 12:12:45.244742] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.829 [2024-06-10 12:12:45.244746] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbaf00) 00:23:55.829 [2024-06-10 12:12:45.244753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.829 [2024-06-10 12:12:45.244771] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26210, cid 3, qid 0 00:23:55.829 [2024-06-10 12:12:45.244857] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.829 [2024-06-10 12:12:45.244864] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.829 [2024-06-10 12:12:45.244868] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.829 [2024-06-10 12:12:45.244873] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26210) on tqpair=0x1bbaf00 00:23:55.829 
[2024-06-10 12:12:45.244879] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:55.829 [2024-06-10 12:12:45.244885] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:55.829 [2024-06-10 12:12:45.244895] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.829 [2024-06-10 12:12:45.244900] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.829 [2024-06-10 12:12:45.244904] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbaf00) 00:23:55.829 [2024-06-10 12:12:45.244911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.829 [2024-06-10 12:12:45.244922] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26210, cid 3, qid 0 00:23:55.829 [2024-06-10 12:12:45.244991] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.829 [2024-06-10 12:12:45.244998] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.829 [2024-06-10 12:12:45.245003] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.830 [2024-06-10 12:12:45.245009] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26210) on tqpair=0x1bbaf00 00:23:55.830 [2024-06-10 12:12:45.245021] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.830 [2024-06-10 12:12:45.245027] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.830 [2024-06-10 12:12:45.245032] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbaf00) 00:23:55.830 [2024-06-10 12:12:45.245040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.830 [2024-06-10 12:12:45.245053] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26210, cid 3, qid 0 00:23:55.830 [2024-06-10 12:12:45.245114] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.830 [2024-06-10 12:12:45.245123] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.830 [2024-06-10 12:12:45.245129] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.830 [2024-06-10 12:12:45.245134] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26210) on tqpair=0x1bbaf00 00:23:55.830 [2024-06-10 12:12:45.245146] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.830 [2024-06-10 12:12:45.245152] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.830 [2024-06-10 12:12:45.245158] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbaf00) 00:23:55.830 [2024-06-10 12:12:45.245166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.830 [2024-06-10 12:12:45.245179] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26210, cid 3, qid 0 00:23:55.830 [2024-06-10 12:12:45.245242] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.830 [2024-06-10 12:12:45.245249] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.830 [2024-06-10 12:12:45.245254] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.830 [2024-06-10 12:12:45.245258] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26210) on tqpair=0x1bbaf00 00:23:55.830 [2024-06-10 12:12:45.245269] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.830 [2024-06-10 12:12:45.245274] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.830 [2024-06-10 12:12:45.245278] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x1bbaf00) 00:23:55.830 [2024-06-10 12:12:45.245287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.830 [2024-06-10 12:12:45.245298] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26210, cid 3, qid 0 00:23:55.830 [2024-06-10 12:12:45.245364] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.830 [2024-06-10 12:12:45.245371] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.830 [2024-06-10 12:12:45.245375] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.830 [2024-06-10 12:12:45.245380] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26210) on tqpair=0x1bbaf00 00:23:55.830 [2024-06-10 12:12:45.245390] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.830 [2024-06-10 12:12:45.245395] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.830 [2024-06-10 12:12:45.245400] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbaf00) 00:23:55.830 [2024-06-10 12:12:45.245406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.830 [2024-06-10 12:12:45.245417] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26210, cid 3, qid 0 00:23:55.830 [2024-06-10 12:12:45.245489] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.830 [2024-06-10 12:12:45.245496] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.830 [2024-06-10 12:12:45.245501] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.830 [2024-06-10 12:12:45.245506] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26210) on tqpair=0x1bbaf00 00:23:55.830 [2024-06-10 12:12:45.245516] nvme_tcp.c: 767:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:23:55.830 [2024-06-10 12:12:45.245521] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.830 [2024-06-10 12:12:45.245526] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbaf00) 00:23:55.830 [2024-06-10 12:12:45.245533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.830 [2024-06-10 12:12:45.245544] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26210, cid 3, qid 0 00:23:55.830 [2024-06-10 12:12:45.245610] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.830 [2024-06-10 12:12:45.245617] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.830 [2024-06-10 12:12:45.245621] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.830 [2024-06-10 12:12:45.245626] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26210) on tqpair=0x1bbaf00 00:23:55.830 [2024-06-10 12:12:45.245636] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.830 [2024-06-10 12:12:45.245641] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.830 [2024-06-10 12:12:45.245646] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbaf00) 00:23:55.830 [2024-06-10 12:12:45.245653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.830 [2024-06-10 12:12:45.245663] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26210, cid 3, qid 0 00:23:55.830 [2024-06-10 12:12:45.245730] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.830 [2024-06-10 12:12:45.245736] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.830 [2024-06-10 12:12:45.245741] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:23:55.830 [2024-06-10 12:12:45.245746] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26210) on tqpair=0x1bbaf00 00:23:55.830 [2024-06-10 12:12:45.245756] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.830 [2024-06-10 12:12:45.245761] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.830 [2024-06-10 12:12:45.245766] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbaf00) 00:23:55.830 [2024-06-10 12:12:45.245772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.830 [2024-06-10 12:12:45.245785] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26210, cid 3, qid 0 00:23:55.830 [2024-06-10 12:12:45.245851] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.830 [2024-06-10 12:12:45.245858] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.830 [2024-06-10 12:12:45.245863] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.830 [2024-06-10 12:12:45.245867] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26210) on tqpair=0x1bbaf00 00:23:55.830 [2024-06-10 12:12:45.245878] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.830 [2024-06-10 12:12:45.245884] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.830 [2024-06-10 12:12:45.245889] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbaf00) 00:23:55.830 [2024-06-10 12:12:45.245896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.830 [2024-06-10 12:12:45.245907] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26210, cid 3, qid 0 00:23:55.830 [2024-06-10 12:12:45.245976] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.830 [2024-06-10 12:12:45.245984] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.830 [2024-06-10 12:12:45.245989] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.830 [2024-06-10 12:12:45.245994] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26210) on tqpair=0x1bbaf00 00:23:55.830 [2024-06-10 12:12:45.246004] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.830 [2024-06-10 12:12:45.246009] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.830 [2024-06-10 12:12:45.246014] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbaf00) 00:23:55.830 [2024-06-10 12:12:45.246021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.830 [2024-06-10 12:12:45.246033] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26210, cid 3, qid 0 00:23:55.830 [2024-06-10 12:12:45.246102] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.830 [2024-06-10 12:12:45.246110] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.830 [2024-06-10 12:12:45.246115] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.830 [2024-06-10 12:12:45.246120] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26210) on tqpair=0x1bbaf00 00:23:55.830 [2024-06-10 12:12:45.246130] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.830 [2024-06-10 12:12:45.246135] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.830 [2024-06-10 12:12:45.246140] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbaf00) 00:23:55.830 [2024-06-10 12:12:45.246147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.830 [2024-06-10 12:12:45.246158] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26210, cid 3, qid 0 00:23:55.830 [2024-06-10 12:12:45.246226] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.830 [2024-06-10 12:12:45.246232] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.830 [2024-06-10 12:12:45.246237] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.830 [2024-06-10 12:12:45.246241] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26210) on tqpair=0x1bbaf00 00:23:55.830 [2024-06-10 12:12:45.246252] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.831 [2024-06-10 12:12:45.246258] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.831 [2024-06-10 12:12:45.246263] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbaf00) 00:23:55.831 [2024-06-10 12:12:45.246270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.831 [2024-06-10 12:12:45.246283] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26210, cid 3, qid 0 00:23:55.831 [2024-06-10 12:12:45.246346] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.831 [2024-06-10 12:12:45.246352] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.831 [2024-06-10 12:12:45.246358] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.831 [2024-06-10 12:12:45.246364] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26210) on tqpair=0x1bbaf00 00:23:55.831 [2024-06-10 12:12:45.246375] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.831 [2024-06-10 12:12:45.246380] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:23:55.831 [2024-06-10 12:12:45.246385] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbaf00) 00:23:55.831 [2024-06-10 12:12:45.246391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.831 [2024-06-10 12:12:45.246402] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26210, cid 3, qid 0 00:23:55.831 [2024-06-10 12:12:45.246468] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.831 [2024-06-10 12:12:45.246482] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.831 [2024-06-10 12:12:45.246487] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.831 [2024-06-10 12:12:45.246492] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26210) on tqpair=0x1bbaf00 00:23:55.831 [2024-06-10 12:12:45.246502] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.831 [2024-06-10 12:12:45.246507] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.831 [2024-06-10 12:12:45.246512] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbaf00) 00:23:55.831 [2024-06-10 12:12:45.246518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.831 [2024-06-10 12:12:45.246529] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26210, cid 3, qid 0 00:23:55.831 [2024-06-10 12:12:45.246593] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.831 [2024-06-10 12:12:45.246599] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.831 [2024-06-10 12:12:45.246604] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.831 [2024-06-10 12:12:45.246609] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0x1c26210) on tqpair=0x1bbaf00 00:23:55.831 [2024-06-10 12:12:45.246619] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.831 [2024-06-10 12:12:45.246624] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.831 [2024-06-10 12:12:45.246629] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbaf00) 00:23:55.831 [2024-06-10 12:12:45.246635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.831 [2024-06-10 12:12:45.246646] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26210, cid 3, qid 0 00:23:55.831 [2024-06-10 12:12:45.246715] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.831 [2024-06-10 12:12:45.246721] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.831 [2024-06-10 12:12:45.246726] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.831 [2024-06-10 12:12:45.246730] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26210) on tqpair=0x1bbaf00 00:23:55.831 [2024-06-10 12:12:45.246741] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.831 [2024-06-10 12:12:45.246746] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.831 [2024-06-10 12:12:45.246750] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbaf00) 00:23:55.831 [2024-06-10 12:12:45.246757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.831 [2024-06-10 12:12:45.246768] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26210, cid 3, qid 0 00:23:55.831 [2024-06-10 12:12:45.246832] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.831 [2024-06-10 12:12:45.246839] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.831 [2024-06-10 12:12:45.246843] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.831 [2024-06-10 12:12:45.246848] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26210) on tqpair=0x1bbaf00 00:23:55.831 [2024-06-10 12:12:45.246859] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.831 [2024-06-10 12:12:45.246864] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.831 [2024-06-10 12:12:45.246868] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbaf00) 00:23:55.831 [2024-06-10 12:12:45.246875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.831 [2024-06-10 12:12:45.246885] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26210, cid 3, qid 0 00:23:55.831 [2024-06-10 12:12:45.246951] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.831 [2024-06-10 12:12:45.246957] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.831 [2024-06-10 12:12:45.246962] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.831 [2024-06-10 12:12:45.246966] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26210) on tqpair=0x1bbaf00 00:23:55.831 [2024-06-10 12:12:45.246977] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.831 [2024-06-10 12:12:45.246982] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.831 [2024-06-10 12:12:45.246987] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbaf00) 00:23:55.831 [2024-06-10 12:12:45.246993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.831 [2024-06-10 
12:12:45.247005] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26210, cid 3, qid 0 00:23:55.831 [2024-06-10 12:12:45.247073] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.831 [2024-06-10 12:12:45.247080] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.831 [2024-06-10 12:12:45.247084] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.831 [2024-06-10 12:12:45.247089] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26210) on tqpair=0x1bbaf00 00:23:55.831 [2024-06-10 12:12:45.247100] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.831 [2024-06-10 12:12:45.247105] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.831 [2024-06-10 12:12:45.247109] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbaf00) 00:23:55.831 [2024-06-10 12:12:45.247116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.831 [2024-06-10 12:12:45.247127] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26210, cid 3, qid 0 00:23:55.831 [2024-06-10 12:12:45.247199] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.831 [2024-06-10 12:12:45.247206] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.831 [2024-06-10 12:12:45.247210] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.831 [2024-06-10 12:12:45.247215] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26210) on tqpair=0x1bbaf00 00:23:55.831 [2024-06-10 12:12:45.247225] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.831 [2024-06-10 12:12:45.247230] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.831 [2024-06-10 12:12:45.247235] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbaf00) 00:23:55.831 [2024-06-10 12:12:45.247242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.831 [2024-06-10 12:12:45.247253] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26210, cid 3, qid 0 00:23:55.831 [2024-06-10 12:12:45.247320] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.831 [2024-06-10 12:12:45.247327] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.831 [2024-06-10 12:12:45.247331] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.831 [2024-06-10 12:12:45.247336] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26210) on tqpair=0x1bbaf00 00:23:55.831 [2024-06-10 12:12:45.247346] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.831 [2024-06-10 12:12:45.247351] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.831 [2024-06-10 12:12:45.247356] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbaf00) 00:23:55.831 [2024-06-10 12:12:45.247362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.831 [2024-06-10 12:12:45.247373] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26210, cid 3, qid 0 00:23:55.831 [2024-06-10 12:12:45.247439] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.831 [2024-06-10 12:12:45.247445] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.831 [2024-06-10 12:12:45.247450] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.831 [2024-06-10 12:12:45.247454] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26210) on tqpair=0x1bbaf00 00:23:55.831 [2024-06-10 12:12:45.247465] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.831 [2024-06-10 12:12:45.247470] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.831 [2024-06-10 12:12:45.247474] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbaf00) 00:23:55.831 [2024-06-10 12:12:45.247486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.831 [2024-06-10 12:12:45.247497] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26210, cid 3, qid 0 00:23:55.831 [2024-06-10 12:12:45.247566] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.831 [2024-06-10 12:12:45.247573] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.831 [2024-06-10 12:12:45.247577] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.831 [2024-06-10 12:12:45.247582] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26210) on tqpair=0x1bbaf00 00:23:55.831 [2024-06-10 12:12:45.247592] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.831 [2024-06-10 12:12:45.247597] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.831 [2024-06-10 12:12:45.247602] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbaf00) 00:23:55.831 [2024-06-10 12:12:45.247608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.831 [2024-06-10 12:12:45.247619] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26210, cid 3, qid 0 00:23:55.832 [2024-06-10 12:12:45.247687] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.832 [2024-06-10 12:12:45.247694] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.832 [2024-06-10 12:12:45.247698] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.832 [2024-06-10 12:12:45.247703] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26210) on tqpair=0x1bbaf00 00:23:55.832 [2024-06-10 12:12:45.247713] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.832 [2024-06-10 12:12:45.247718] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.832 [2024-06-10 12:12:45.247723] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbaf00) 00:23:55.832 [2024-06-10 12:12:45.247730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.832 [2024-06-10 12:12:45.247741] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26210, cid 3, qid 0 00:23:55.832 [2024-06-10 12:12:45.247806] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.832 [2024-06-10 12:12:45.247815] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.832 [2024-06-10 12:12:45.247819] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.832 [2024-06-10 12:12:45.247824] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26210) on tqpair=0x1bbaf00 00:23:55.832 [2024-06-10 12:12:45.247834] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.832 [2024-06-10 12:12:45.247839] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.832 [2024-06-10 12:12:45.247844] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbaf00) 00:23:55.832 [2024-06-10 12:12:45.247851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.832 [2024-06-10 12:12:45.247861] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26210, cid 3, qid 0 00:23:55.832 
[2024-06-10 12:12:45.247926] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.832 [2024-06-10 12:12:45.247932] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.832 [2024-06-10 12:12:45.247937] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.832 [2024-06-10 12:12:45.247941] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26210) on tqpair=0x1bbaf00 00:23:55.832 [2024-06-10 12:12:45.247952] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.832 [2024-06-10 12:12:45.247957] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.832 [2024-06-10 12:12:45.247961] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbaf00) 00:23:55.832 [2024-06-10 12:12:45.247968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.832 [2024-06-10 12:12:45.247979] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26210, cid 3, qid 0 00:23:55.832 [2024-06-10 12:12:45.248043] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.832 [2024-06-10 12:12:45.248049] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.832 [2024-06-10 12:12:45.248054] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.832 [2024-06-10 12:12:45.248058] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26210) on tqpair=0x1bbaf00 00:23:55.832 [2024-06-10 12:12:45.248069] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.832 [2024-06-10 12:12:45.248074] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.832 [2024-06-10 12:12:45.248078] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbaf00) 00:23:55.832 [2024-06-10 12:12:45.248085] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.832 [2024-06-10 12:12:45.248096] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26210, cid 3, qid 0 00:23:55.832 [2024-06-10 12:12:45.248162] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.832 [2024-06-10 12:12:45.248169] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.832 [2024-06-10 12:12:45.248173] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.832 [2024-06-10 12:12:45.248178] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26210) on tqpair=0x1bbaf00 00:23:55.832 [2024-06-10 12:12:45.248188] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.832 [2024-06-10 12:12:45.248193] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.832 [2024-06-10 12:12:45.248198] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbaf00) 00:23:55.832 [2024-06-10 12:12:45.248204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.832 [2024-06-10 12:12:45.248215] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26210, cid 3, qid 0 00:23:55.832 [2024-06-10 12:12:45.248281] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.832 [2024-06-10 12:12:45.248288] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.832 [2024-06-10 12:12:45.248293] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.832 [2024-06-10 12:12:45.248298] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26210) on tqpair=0x1bbaf00 00:23:55.832 [2024-06-10 12:12:45.248309] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.832 [2024-06-10 12:12:45.248314] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.832 [2024-06-10 12:12:45.248318] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbaf00) 00:23:55.832 [2024-06-10 12:12:45.248325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.832 [2024-06-10 12:12:45.248336] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26210, cid 3, qid 0 00:23:55.832 [2024-06-10 12:12:45.248402] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.832 [2024-06-10 12:12:45.248409] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.832 [2024-06-10 12:12:45.248413] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.832 [2024-06-10 12:12:45.248418] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26210) on tqpair=0x1bbaf00 00:23:55.832 [2024-06-10 12:12:45.248428] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.832 [2024-06-10 12:12:45.248433] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.832 [2024-06-10 12:12:45.248438] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbaf00) 00:23:55.832 [2024-06-10 12:12:45.248444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.832 [2024-06-10 12:12:45.248455] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26210, cid 3, qid 0 00:23:55.832 [2024-06-10 12:12:45.252490] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.832 [2024-06-10 12:12:45.252501] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.832 [2024-06-10 12:12:45.252505] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.832 [2024-06-10 12:12:45.252510] nvme_tcp.c: 
909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26210) on tqpair=0x1bbaf00 00:23:55.832 [2024-06-10 12:12:45.252522] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.832 [2024-06-10 12:12:45.252527] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.832 [2024-06-10 12:12:45.252532] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bbaf00) 00:23:55.832 [2024-06-10 12:12:45.252539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.832 [2024-06-10 12:12:45.252552] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c26210, cid 3, qid 0 00:23:55.832 [2024-06-10 12:12:45.252621] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.832 [2024-06-10 12:12:45.252628] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.832 [2024-06-10 12:12:45.252632] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.832 [2024-06-10 12:12:45.252637] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c26210) on tqpair=0x1bbaf00 00:23:55.832 [2024-06-10 12:12:45.252646] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:23:55.832 0 Kelvin (-273 Celsius) 00:23:55.832 Available Spare: 0% 00:23:55.832 Available Spare Threshold: 0% 00:23:55.832 Life Percentage Used: 0% 00:23:55.832 Data Units Read: 0 00:23:55.832 Data Units Written: 0 00:23:55.832 Host Read Commands: 0 00:23:55.832 Host Write Commands: 0 00:23:55.832 Controller Busy Time: 0 minutes 00:23:55.832 Power Cycles: 0 00:23:55.832 Power On Hours: 0 hours 00:23:55.832 Unsafe Shutdowns: 0 00:23:55.832 Unrecoverable Media Errors: 0 00:23:55.832 Lifetime Error Log Entries: 0 00:23:55.832 Warning Temperature Time: 0 minutes 00:23:55.832 Critical Temperature Time: 0 minutes 
00:23:55.832 00:23:55.832 Number of Queues 00:23:55.832 ================ 00:23:55.832 Number of I/O Submission Queues: 127 00:23:55.832 Number of I/O Completion Queues: 127 00:23:55.832 00:23:55.832 Active Namespaces 00:23:55.832 ================= 00:23:55.832 Namespace ID:1 00:23:55.832 Error Recovery Timeout: Unlimited 00:23:55.832 Command Set Identifier: NVM (00h) 00:23:55.832 Deallocate: Supported 00:23:55.832 Deallocated/Unwritten Error: Not Supported 00:23:55.832 Deallocated Read Value: Unknown 00:23:55.832 Deallocate in Write Zeroes: Not Supported 00:23:55.832 Deallocated Guard Field: 0xFFFF 00:23:55.832 Flush: Supported 00:23:55.832 Reservation: Supported 00:23:55.832 Namespace Sharing Capabilities: Multiple Controllers 00:23:55.832 Size (in LBAs): 131072 (0GiB) 00:23:55.832 Capacity (in LBAs): 131072 (0GiB) 00:23:55.832 Utilization (in LBAs): 131072 (0GiB) 00:23:55.832 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:55.832 EUI64: ABCDEF0123456789 00:23:55.832 UUID: 288cafcd-2089-4d4e-832c-d2a337d96116 00:23:55.832 Thin Provisioning: Not Supported 00:23:55.832 Per-NS Atomic Units: Yes 00:23:55.832 Atomic Boundary Size (Normal): 0 00:23:55.832 Atomic Boundary Size (PFail): 0 00:23:55.832 Atomic Boundary Offset: 0 00:23:55.832 Maximum Single Source Range Length: 65535 00:23:55.832 Maximum Copy Length: 65535 00:23:55.832 Maximum Source Range Count: 1 00:23:55.832 NGUID/EUI64 Never Reused: No 00:23:55.832 Namespace Write Protected: No 00:23:55.832 Number of LBA Formats: 1 00:23:55.832 Current LBA Format: LBA Format #00 00:23:55.832 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:55.832 00:23:55.832 12:12:45 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:55.832 12:12:45 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:55.832 12:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.833 12:12:45 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@10 -- # set +x 00:23:55.833 12:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.833 12:12:45 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:55.833 12:12:45 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:55.833 12:12:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:55.833 12:12:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:23:55.833 12:12:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:55.833 12:12:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:23:55.833 12:12:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:55.833 12:12:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:55.833 rmmod nvme_tcp 00:23:55.833 rmmod nvme_fabrics 00:23:55.833 rmmod nvme_keyring 00:23:56.091 12:12:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:56.091 12:12:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:23:56.091 12:12:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:23:56.091 12:12:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2302452 ']' 00:23:56.091 12:12:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2302452 00:23:56.091 12:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@949 -- # '[' -z 2302452 ']' 00:23:56.091 12:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # kill -0 2302452 00:23:56.091 12:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # uname 00:23:56.091 12:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:56.091 12:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2302452 00:23:56.091 12:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # process_name=reactor_0 
00:23:56.091 12:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:56.091 12:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2302452' 00:23:56.091 killing process with pid 2302452 00:23:56.091 12:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@968 -- # kill 2302452 00:23:56.091 12:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@973 -- # wait 2302452 00:23:56.350 12:12:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:56.350 12:12:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:56.350 12:12:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:56.350 12:12:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:56.350 12:12:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:56.350 12:12:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.350 12:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:56.350 12:12:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.253 12:12:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:58.253 00:23:58.253 real 0m9.972s 00:23:58.253 user 0m7.504s 00:23:58.253 sys 0m5.174s 00:23:58.253 12:12:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:58.253 12:12:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:58.253 ************************************ 00:23:58.253 END TEST nvmf_identify 00:23:58.253 ************************************ 00:23:58.253 12:12:47 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:58.253 12:12:47 nvmf_tcp -- 
common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:58.253 12:12:47 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:58.253 12:12:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:58.253 ************************************ 00:23:58.253 START TEST nvmf_perf 00:23:58.253 ************************************ 00:23:58.253 12:12:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:58.512 * Looking for test storage... 00:23:58.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:58.512 12:12:47 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:58.512 12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:58.512 12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:58.512 12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:58.512 12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:58.512 12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:58.512 12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:58.512 12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:58.512 12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:58.512 12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:58.512 12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:58.512 12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:58.512 12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:58.512 12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # 
NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:58.512 12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:58.512 12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:58.512 12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:58.512 12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:58.512 12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:58.512 12:12:47 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:58.512 12:12:47 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:58.512 12:12:47 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:58.513 12:12:47 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.513 12:12:47 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.513 12:12:47 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.513 12:12:47 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:58.513 12:12:47 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.513 12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:23:58.513 12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:58.513 
12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:58.513 12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:58.513 12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:58.513 12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:58.513 12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:58.513 12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:58.513 12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:58.513 12:12:47 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:58.513 12:12:47 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:58.513 12:12:47 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:58.513 12:12:47 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:58.513 12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:58.513 12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:58.513 12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:58.513 12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:58.513 12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:58.513 12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.513 12:12:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:58.513 12:12:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.513 12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:58.513 12:12:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:58.513 12:12:47 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@285 -- # xtrace_disable 00:23:58.513 12:12:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:05.077 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:05.077 12:12:54 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:05.077 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.077 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:05.078 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:05.078 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.078 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:05.078 Found net devices under 0000:af:00.0: cvl_0_0 00:24:05.078 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.078 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:05.078 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.078 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:05.078 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.078 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:05.078 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:05.078 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.078 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:05.078 Found net devices under 0000:af:00.1: cvl_0_1 00:24:05.078 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.078 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:05.078 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:05.078 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:05.078 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:05.078 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:05.078 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:05.078 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:05.078 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:05.078 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:05.078 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:05.078 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:05.078 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:05.078 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:05.078 12:12:54 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:05.078 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:05.078 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:05.078 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:05.078 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:05.078 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:05.078 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:05.078 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:05.078 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:05.337 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:05.337 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:05.337 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:05.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:05.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:24:05.337 00:24:05.337 --- 10.0.0.2 ping statistics --- 00:24:05.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.337 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:24:05.337 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:05.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:05.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:24:05.337 00:24:05.337 --- 10.0.0.1 ping statistics --- 00:24:05.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.337 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:24:05.337 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:05.337 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:24:05.337 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:05.337 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:05.337 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:05.337 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:05.337 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:05.337 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:05.337 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:05.337 12:12:54 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:05.337 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:05.337 12:12:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:05.337 12:12:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:05.337 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2306396 00:24:05.337 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:05.337 12:12:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2306396 00:24:05.337 12:12:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@830 -- # '[' -z 2306396 ']' 00:24:05.337 12:12:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:24:05.337 12:12:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:05.337 12:12:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.337 12:12:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:05.337 12:12:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:05.337 [2024-06-10 12:12:54.785645] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:24:05.337 [2024-06-10 12:12:54.785693] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:05.337 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.596 [2024-06-10 12:12:54.857543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:05.596 [2024-06-10 12:12:54.928684] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:05.596 [2024-06-10 12:12:54.928728] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:05.596 [2024-06-10 12:12:54.928737] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:05.596 [2024-06-10 12:12:54.928746] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:05.596 [2024-06-10 12:12:54.928769] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:05.596 [2024-06-10 12:12:54.928818] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.596 [2024-06-10 12:12:54.928911] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:05.596 [2024-06-10 12:12:54.929001] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:24:05.596 [2024-06-10 12:12:54.929003] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.161 12:12:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:06.161 12:12:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@863 -- # return 0 00:24:06.161 12:12:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:06.161 12:12:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:06.161 12:12:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:06.161 12:12:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:06.161 12:12:55 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:06.161 12:12:55 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:09.441 12:12:58 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:09.441 12:12:58 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:09.441 12:12:58 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:24:09.441 12:12:58 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:09.699 12:12:59 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:09.699 12:12:59 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 
0000:d8:00.0 ']' 00:24:09.699 12:12:59 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:09.699 12:12:59 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:09.699 12:12:59 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:09.699 [2024-06-10 12:12:59.208887] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:09.957 12:12:59 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:09.957 12:12:59 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:09.957 12:12:59 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:10.214 12:12:59 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:10.214 12:12:59 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:10.472 12:12:59 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:10.472 [2024-06-10 12:12:59.955584] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:10.472 12:12:59 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:10.730 12:13:00 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:24:10.730 12:13:00 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 
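The target setup traced above (host/perf.sh creating the TCP transport, the subsystem, its two namespaces, and the listener) reduces to a short sequence of rpc.py calls. A minimal dry-run sketch, assuming `scripts/rpc.py` relative to an SPDK checkout (the absolute Jenkins path from the log is environment-specific); the commands are only echoed here so the sketch runs anywhere — drop the `echo` to execute them against a live target:

```shell
#!/bin/sh
# Dry-run sketch of the RPC sequence from host/perf.sh, as seen in the trace.
# RPC path is an assumption; point it at scripts/rpc.py in your SPDK tree.
nvmf_setup_cmds() {
    RPC="scripts/rpc.py"
    NQN="nqn.2016-06.io.spdk:cnode1"
    # Create the TCP transport (-o: disable request digests, per the trace)
    echo "$RPC nvmf_create_transport -t tcp -o"
    # Create the subsystem with any-host access and the serial from the log
    echo "$RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001"
    # Attach both bdevs found earlier (Malloc0 and the local NVMe drive)
    echo "$RPC nvmf_subsystem_add_ns $NQN Malloc0"
    echo "$RPC nvmf_subsystem_add_ns $NQN Nvme0n1"
    # Expose the subsystem on the target-namespace IP and port from the log
    echo "$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420"
}
nvmf_setup_cmds
```

After this sequence the initiator side can run spdk_nvme_perf against `trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420`, which is exactly what the subsequent perf invocations in the trace do.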
00:24:10.730 12:13:00 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:10.730 12:13:00 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:24:12.131 Initializing NVMe Controllers 00:24:12.131 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:24:12.131 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:24:12.131 Initialization complete. Launching workers. 00:24:12.131 ======================================================== 00:24:12.131 Latency(us) 00:24:12.131 Device Information : IOPS MiB/s Average min max 00:24:12.131 PCIE (0000:d8:00.0) NSID 1 from core 0: 102032.06 398.56 313.29 36.67 7544.95 00:24:12.131 ======================================================== 00:24:12.131 Total : 102032.06 398.56 313.29 36.67 7544.95 00:24:12.131 00:24:12.131 12:13:01 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:12.131 EAL: No free 2048 kB hugepages reported on node 1 00:24:13.503 Initializing NVMe Controllers 00:24:13.503 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:13.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:13.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:13.503 Initialization complete. Launching workers. 
00:24:13.503 ======================================================== 00:24:13.503 Latency(us) 00:24:13.503 Device Information : IOPS MiB/s Average min max 00:24:13.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 89.00 0.35 11479.73 120.92 44792.24 00:24:13.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 54.00 0.21 19045.52 7966.10 47885.44 00:24:13.503 ======================================================== 00:24:13.503 Total : 143.00 0.56 14336.74 120.92 47885.44 00:24:13.503 00:24:13.503 12:13:02 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:13.503 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.877 Initializing NVMe Controllers 00:24:14.877 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:14.877 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:14.877 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:14.877 Initialization complete. Launching workers. 
00:24:14.877 ======================================================== 00:24:14.877 Latency(us) 00:24:14.877 Device Information : IOPS MiB/s Average min max 00:24:14.877 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11760.99 45.94 2722.64 416.59 6228.51 00:24:14.877 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3805.00 14.86 8467.69 7092.23 15995.39 00:24:14.877 ======================================================== 00:24:14.877 Total : 15565.98 60.80 4126.98 416.59 15995.39 00:24:14.877 00:24:14.877 12:13:04 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:14.877 12:13:04 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:14.877 12:13:04 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:14.877 EAL: No free 2048 kB hugepages reported on node 1 00:24:17.406 Initializing NVMe Controllers 00:24:17.406 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:17.406 Controller IO queue size 128, less than required. 00:24:17.406 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:17.406 Controller IO queue size 128, less than required. 00:24:17.406 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:17.406 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:17.406 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:17.406 Initialization complete. Launching workers. 
00:24:17.406 ======================================================== 00:24:17.406 Latency(us) 00:24:17.406 Device Information : IOPS MiB/s Average min max 00:24:17.406 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1684.99 421.25 77356.32 40404.76 103096.31 00:24:17.406 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 606.00 151.50 215388.76 62677.03 342779.48 00:24:17.406 ======================================================== 00:24:17.406 Total : 2290.99 572.75 113867.74 40404.76 342779.48 00:24:17.406 00:24:17.406 12:13:06 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:17.406 EAL: No free 2048 kB hugepages reported on node 1 00:24:17.406 No valid NVMe controllers or AIO or URING devices found 00:24:17.406 Initializing NVMe Controllers 00:24:17.406 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:17.406 Controller IO queue size 128, less than required. 00:24:17.406 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:17.406 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:17.406 Controller IO queue size 128, less than required. 00:24:17.406 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:17.406 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:24:17.406 WARNING: Some requested NVMe devices were skipped 00:24:17.406 12:13:06 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:17.406 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.934 Initializing NVMe Controllers 00:24:19.934 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:19.934 Controller IO queue size 128, less than required. 00:24:19.934 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:19.934 Controller IO queue size 128, less than required. 00:24:19.934 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:19.934 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:19.934 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:19.934 Initialization complete. Launching workers. 
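The "IO size 36964 (-o) is not a multiple of nsid sector size 512" warnings above fire because spdk_nvme_perf skips any namespace whose sector size does not evenly divide the requested IO size. A quick sketch of that alignment check, using the values from the trace:

```shell
#!/bin/sh
# Reproduce the alignment check behind the perf tool's warning:
# an IO size must be a whole number of sectors for the namespace to be tested.
io_size=36964      # the -o value from the trace
sector_size=512    # nsid 1/2 sector size reported in the warning
remainder=$((io_size % sector_size))
if [ "$remainder" -eq 0 ]; then
    echo "IO size $io_size is sector-aligned"
else
    # 36964 = 72 * 512 + 100, so the namespace is removed from the test
    echo "IO size $io_size is NOT aligned (remainder $remainder)"
fi
```

With both namespaces skipped, the run reports "No valid NVMe controllers or AIO or URING devices found", matching the trace.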
00:24:19.934 00:24:19.934 ==================== 00:24:19.934 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:19.934 TCP transport: 00:24:19.934 polls: 19021 00:24:19.934 idle_polls: 9664 00:24:19.934 sock_completions: 9357 00:24:19.934 nvme_completions: 6691 00:24:19.934 submitted_requests: 9970 00:24:19.934 queued_requests: 1 00:24:19.934 00:24:19.934 ==================== 00:24:19.934 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:19.934 TCP transport: 00:24:19.934 polls: 19999 00:24:19.934 idle_polls: 10349 00:24:19.934 sock_completions: 9650 00:24:19.934 nvme_completions: 5931 00:24:19.934 submitted_requests: 8966 00:24:19.934 queued_requests: 1 00:24:19.934 ======================================================== 00:24:19.934 Latency(us) 00:24:19.934 Device Information : IOPS MiB/s Average min max 00:24:19.934 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1672.40 418.10 78018.45 48624.51 121382.06 00:24:19.934 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1482.41 370.60 87778.08 35652.22 122152.01 00:24:19.934 ======================================================== 00:24:19.934 Total : 3154.81 788.70 82604.39 35652.22 122152.01 00:24:19.934 00:24:19.934 12:13:09 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:19.934 12:13:09 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:19.934 12:13:09 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:19.934 12:13:09 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:19.934 12:13:09 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:19.934 12:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:19.934 12:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:19.934 12:13:09 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:19.934 12:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:19.934 12:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:19.934 12:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:19.934 rmmod nvme_tcp 00:24:19.934 rmmod nvme_fabrics 00:24:19.934 rmmod nvme_keyring 00:24:19.934 12:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:19.934 12:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:19.934 12:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:19.934 12:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2306396 ']' 00:24:19.934 12:13:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2306396 00:24:19.934 12:13:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@949 -- # '[' -z 2306396 ']' 00:24:19.934 12:13:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # kill -0 2306396 00:24:19.934 12:13:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # uname 00:24:19.934 12:13:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:19.934 12:13:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2306396 00:24:20.192 12:13:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:20.192 12:13:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:20.192 12:13:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2306396' 00:24:20.192 killing process with pid 2306396 00:24:20.192 12:13:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@968 -- # kill 2306396 00:24:20.192 12:13:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@973 -- # wait 2306396 00:24:22.090 12:13:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:22.090 12:13:11 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:22.090 12:13:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:22.090 12:13:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:22.090 12:13:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:22.090 12:13:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.090 12:13:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:22.090 12:13:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.624 12:13:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:24.624 00:24:24.624 real 0m25.853s 00:24:24.624 user 1m6.097s 00:24:24.624 sys 0m9.114s 00:24:24.624 12:13:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:24.624 12:13:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:24.624 ************************************ 00:24:24.624 END TEST nvmf_perf 00:24:24.624 ************************************ 00:24:24.624 12:13:13 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:24.624 12:13:13 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:24.624 12:13:13 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:24.624 12:13:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:24.624 ************************************ 00:24:24.624 START TEST nvmf_fio_host 00:24:24.624 ************************************ 00:24:24.624 12:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:24.624 * Looking for test storage... 
00:24:24.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:24.624 12:13:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:24.624 12:13:13 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:24.624 12:13:13 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:24.624 12:13:13 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:24.624 12:13:13 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.624 12:13:13 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.624 12:13:13 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.624 12:13:13 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:24.624 12:13:13 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.624 12:13:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:24.624 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:24.624 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:24.624 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:24.624 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:24.624 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:24.624 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:24.624 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:24:24.624 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:24.624 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:24.624 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:24.624 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:24.624 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:24.624 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:24.624 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:24.624 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:24.624 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:24.624 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:24.624 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:24.624 12:13:13 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:24.624 12:13:13 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:24.624 12:13:13 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:24.624 12:13:13 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.624 12:13:13 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.625 12:13:13 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.625 12:13:13 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:24.625 
12:13:13 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.625 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:24.625 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:24.625 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:24.625 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:24.625 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:24.625 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:24.625 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:24.625 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:24.625 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:24.625 12:13:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:24.625 12:13:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:24.625 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:24.625 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:24.625 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 
00:24:24.625 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:24.625 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:24.625 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.625 12:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:24.625 12:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.625 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:24.625 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:24.625 12:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:24.625 12:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@297 -- # local -ga x722 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:31.192 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:31.192 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:31.192 Found net devices under 0000:af:00.0: cvl_0_0 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:31.192 Found net devices under 0000:af:00.1: cvl_0_1 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:24:31.192 
12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:31.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:31.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:24:31.192 00:24:31.192 --- 10.0.0.2 ping statistics --- 00:24:31.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.192 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:31.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:31.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:24:31.192 00:24:31.192 --- 10.0.0.1 ping statistics --- 00:24:31.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.192 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:24:31.192 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:31.193 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:31.193 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:31.193 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:31.193 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:31.193 12:13:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:31.193 12:13:19 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:31.193 12:13:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:31.193 12:13:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:31.193 12:13:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:31.193 12:13:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.193 12:13:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2312585 00:24:31.193 12:13:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:31.193 12:13:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2312585 00:24:31.193 12:13:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@830 -- # '[' -z 2312585 ']' 00:24:31.193 12:13:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.193 12:13:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:31.193 12:13:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.193 12:13:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:31.193 12:13:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.193 12:13:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:31.193 [2024-06-10 12:13:19.929693] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:24:31.193 [2024-06-10 12:13:19.929749] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.193 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.193 [2024-06-10 12:13:20.005216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:31.193 [2024-06-10 12:13:20.090683] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.193 [2024-06-10 12:13:20.090724] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:31.193 [2024-06-10 12:13:20.090734] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.193 [2024-06-10 12:13:20.090742] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.193 [2024-06-10 12:13:20.090766] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:31.193 [2024-06-10 12:13:20.090816] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.193 [2024-06-10 12:13:20.090834] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:31.193 [2024-06-10 12:13:20.090936] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:24:31.193 [2024-06-10 12:13:20.090938] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.450 12:13:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:31.450 12:13:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@863 -- # return 0 00:24:31.450 12:13:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:31.450 [2024-06-10 12:13:20.895671] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:31.450 12:13:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:31.450 12:13:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:31.450 12:13:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.708 12:13:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:31.708 Malloc1 00:24:31.708 12:13:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:31.965 12:13:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:32.223 12:13:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:32.223 
[2024-06-10 12:13:21.672749] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:32.223 12:13:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:32.481 12:13:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:32.481 12:13:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:32.481 12:13:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:32.481 12:13:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:24:32.481 12:13:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:32.481 12:13:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:24:32.481 12:13:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:32.481 12:13:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:24:32.481 12:13:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:24:32.481 12:13:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:24:32.481 12:13:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:32.481 12:13:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:24:32.481 12:13:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:24:32.481 12:13:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:24:32.481 12:13:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:24:32.481 12:13:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:24:32.481 12:13:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:32.481 12:13:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:24:32.481 12:13:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:24:32.481 12:13:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:24:32.481 12:13:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:24:32.481 12:13:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:32.481 12:13:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:32.738 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:32.738 fio-3.35 00:24:32.738 Starting 1 thread 00:24:32.995 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.524 [2024-06-10 12:13:24.500720] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2361030 is same with the state(5) to be set 00:24:35.524 [2024-06-10 
12:13:24.500777] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2361030 is same with the state(5) to be set 00:24:35.524 [2024-06-10 12:13:24.500788] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2361030 is same with the state(5) to be set 00:24:35.524 [2024-06-10 12:13:24.500797] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2361030 is same with the state(5) to be set 00:24:35.524 [2024-06-10 12:13:24.500806] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2361030 is same with the state(5) to be set 00:24:35.524 00:24:35.524 test: (groupid=0, jobs=1): err= 0: pid=2313267: Mon Jun 10 12:13:24 2024 00:24:35.524 read: IOPS=12.5k, BW=48.8MiB/s (51.2MB/s)(97.9MiB/2005msec) 00:24:35.524 slat (nsec): min=1520, max=239287, avg=1656.29, stdev=2108.56 00:24:35.524 clat (usec): min=3220, max=9787, avg=5652.80, stdev=413.95 00:24:35.524 lat (usec): min=3253, max=9788, avg=5654.46, stdev=413.86 00:24:35.524 clat percentiles (usec): 00:24:35.524 | 1.00th=[ 4621], 5.00th=[ 5014], 10.00th=[ 5145], 20.00th=[ 5342], 00:24:35.524 | 30.00th=[ 5473], 40.00th=[ 5538], 50.00th=[ 5669], 60.00th=[ 5735], 00:24:35.524 | 70.00th=[ 5866], 80.00th=[ 5997], 90.00th=[ 6128], 95.00th=[ 6259], 00:24:35.524 | 99.00th=[ 6587], 99.50th=[ 6718], 99.90th=[ 7898], 99.95th=[ 9241], 00:24:35.524 | 99.99th=[ 9765] 00:24:35.524 bw ( KiB/s): min=48864, max=50688, per=99.95%, avg=49954.00, stdev=772.50, samples=4 00:24:35.524 iops : min=12216, max=12672, avg=12488.50, stdev=193.12, samples=4 00:24:35.524 write: IOPS=12.5k, BW=48.8MiB/s (51.1MB/s)(97.8MiB/2005msec); 0 zone resets 00:24:35.524 slat (nsec): min=1568, max=225431, avg=1734.89, stdev=1592.29 00:24:35.524 clat (usec): min=2455, max=8758, avg=4550.05, stdev=339.59 00:24:35.524 lat (usec): min=2470, max=8760, avg=4551.79, stdev=339.54 00:24:35.524 clat percentiles (usec): 00:24:35.524 | 1.00th=[ 3720], 5.00th=[ 4015], 10.00th=[ 
4146], 20.00th=[ 4293], 00:24:35.524 | 30.00th=[ 4359], 40.00th=[ 4490], 50.00th=[ 4555], 60.00th=[ 4621], 00:24:35.524 | 70.00th=[ 4752], 80.00th=[ 4817], 90.00th=[ 4948], 95.00th=[ 5080], 00:24:35.524 | 99.00th=[ 5342], 99.50th=[ 5407], 99.90th=[ 6521], 99.95th=[ 7504], 00:24:35.524 | 99.99th=[ 8717] 00:24:35.524 bw ( KiB/s): min=49488, max=50408, per=100.00%, avg=49956.00, stdev=376.06, samples=4 00:24:35.524 iops : min=12372, max=12602, avg=12489.00, stdev=94.01, samples=4 00:24:35.524 lat (msec) : 4=2.27%, 10=97.73% 00:24:35.524 cpu : usr=68.66%, sys=29.09%, ctx=101, majf=0, minf=5 00:24:35.524 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:35.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:35.524 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:35.524 issued rwts: total=25051,25031,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:35.524 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:35.524 00:24:35.524 Run status group 0 (all jobs): 00:24:35.524 READ: bw=48.8MiB/s (51.2MB/s), 48.8MiB/s-48.8MiB/s (51.2MB/s-51.2MB/s), io=97.9MiB (103MB), run=2005-2005msec 00:24:35.524 WRITE: bw=48.8MiB/s (51.1MB/s), 48.8MiB/s-48.8MiB/s (51.1MB/s-51.1MB/s), io=97.8MiB (103MB), run=2005-2005msec 00:24:35.524 12:13:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:35.524 12:13:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:35.524 12:13:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:24:35.524 12:13:24 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:35.524 12:13:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:24:35.524 12:13:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:35.524 12:13:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:24:35.524 12:13:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:24:35.524 12:13:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:24:35.524 12:13:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:35.524 12:13:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:24:35.524 12:13:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:24:35.524 12:13:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:24:35.524 12:13:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:24:35.524 12:13:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:24:35.524 12:13:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:35.524 12:13:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:24:35.524 12:13:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:24:35.524 12:13:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:24:35.524 12:13:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:24:35.524 12:13:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:35.524 12:13:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:35.524 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:35.524 fio-3.35 00:24:35.524 Starting 1 thread 00:24:35.524 EAL: No free 2048 kB hugepages reported on node 1 00:24:38.054 00:24:38.054 test: (groupid=0, jobs=1): err= 0: pid=2313879: Mon Jun 10 12:13:27 2024 00:24:38.054 read: IOPS=11.2k, BW=175MiB/s (183MB/s)(350MiB/2006msec) 00:24:38.054 slat (nsec): min=2463, max=79822, avg=2665.09, stdev=1097.21 00:24:38.054 clat (usec): min=1823, max=49824, avg=6767.77, stdev=3400.82 00:24:38.054 lat (usec): min=1826, max=49827, avg=6770.44, stdev=3400.86 00:24:38.054 clat percentiles (usec): 00:24:38.054 | 1.00th=[ 3359], 5.00th=[ 4080], 10.00th=[ 4555], 20.00th=[ 5145], 00:24:38.054 | 30.00th=[ 5604], 40.00th=[ 5997], 50.00th=[ 6521], 60.00th=[ 6980], 00:24:38.054 | 70.00th=[ 7308], 80.00th=[ 7701], 90.00th=[ 8717], 95.00th=[ 9634], 00:24:38.054 | 99.00th=[11731], 99.50th=[43254], 99.90th=[48497], 99.95th=[49021], 00:24:38.054 | 99.99th=[49546] 00:24:38.054 bw ( KiB/s): min=82464, max=97280, per=50.33%, avg=90032.00, stdev=7640.10, samples=4 00:24:38.054 iops : min= 5154, max= 6080, avg=5627.00, stdev=477.51, samples=4 00:24:38.054 write: IOPS=6678, BW=104MiB/s (109MB/s)(184MiB/1760msec); 0 zone resets 00:24:38.054 slat (usec): min=28, max=252, avg=29.85, stdev= 5.37 00:24:38.054 clat (usec): min=3834, max=14192, avg=8084.73, stdev=1448.60 00:24:38.054 lat (usec): min=3863, max=14222, avg=8114.58, stdev=1449.34 00:24:38.054 clat percentiles (usec): 00:24:38.054 | 1.00th=[ 5473], 5.00th=[ 6063], 10.00th=[ 6456], 20.00th=[ 6783], 00:24:38.054 | 30.00th=[ 7177], 40.00th=[ 
7504], 50.00th=[ 7898], 60.00th=[ 8291], 00:24:38.054 | 70.00th=[ 8717], 80.00th=[ 9241], 90.00th=[10159], 95.00th=[10814], 00:24:38.054 | 99.00th=[11731], 99.50th=[12125], 99.90th=[13042], 99.95th=[13173], 00:24:38.054 | 99.99th=[14091] 00:24:38.054 bw ( KiB/s): min=87040, max=101376, per=87.73%, avg=93744.00, stdev=7273.34, samples=4 00:24:38.054 iops : min= 5440, max= 6336, avg=5859.00, stdev=454.58, samples=4 00:24:38.054 lat (msec) : 2=0.01%, 4=2.74%, 10=90.77%, 20=6.11%, 50=0.37% 00:24:38.054 cpu : usr=83.94%, sys=15.06%, ctx=42, majf=0, minf=2 00:24:38.054 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:38.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:38.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:38.054 issued rwts: total=22427,11754,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:38.054 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:38.054 00:24:38.054 Run status group 0 (all jobs): 00:24:38.054 READ: bw=175MiB/s (183MB/s), 175MiB/s-175MiB/s (183MB/s-183MB/s), io=350MiB (367MB), run=2006-2006msec 00:24:38.054 WRITE: bw=104MiB/s (109MB/s), 104MiB/s-104MiB/s (109MB/s-109MB/s), io=184MiB (193MB), run=1760-1760msec 00:24:38.054 12:13:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:38.054 12:13:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:38.054 12:13:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:38.054 12:13:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:38.054 12:13:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:38.054 12:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:38.054 12:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:38.054 12:13:27 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:38.054 12:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:38.054 12:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:38.054 12:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:38.054 rmmod nvme_tcp 00:24:38.313 rmmod nvme_fabrics 00:24:38.313 rmmod nvme_keyring 00:24:38.314 12:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:38.314 12:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:38.314 12:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:38.314 12:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2312585 ']' 00:24:38.314 12:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2312585 00:24:38.314 12:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@949 -- # '[' -z 2312585 ']' 00:24:38.314 12:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # kill -0 2312585 00:24:38.314 12:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # uname 00:24:38.314 12:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:38.314 12:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2312585 00:24:38.314 12:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:38.314 12:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:38.314 12:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2312585' 00:24:38.314 killing process with pid 2312585 00:24:38.314 12:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@968 -- # kill 2312585 00:24:38.314 12:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@973 -- # wait 2312585 00:24:38.572 12:13:27 nvmf_tcp.nvmf_fio_host 
-- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:38.572 12:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:38.572 12:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:38.572 12:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:38.572 12:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:38.572 12:13:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.572 12:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:38.572 12:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.477 12:13:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:40.477 00:24:40.477 real 0m16.265s 00:24:40.477 user 0m52.387s 00:24:40.477 sys 0m7.020s 00:24:40.477 12:13:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:40.477 12:13:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.477 ************************************ 00:24:40.477 END TEST nvmf_fio_host 00:24:40.477 ************************************ 00:24:40.736 12:13:30 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:40.736 12:13:30 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:40.736 12:13:30 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:40.736 12:13:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:40.736 ************************************ 00:24:40.736 START TEST nvmf_failover 00:24:40.736 ************************************ 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh 
--transport=tcp 00:24:40.736 * Looking for test storage... 00:24:40.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:40.736 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:40.737 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:40.737 12:13:30 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:40.737 12:13:30 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:40.737 12:13:30 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:40.737 12:13:30 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:40.737 12:13:30 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:40.737 12:13:30 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:40.737 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:40.737 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:40.737 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:40.737 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:40.737 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:40.737 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.737 12:13:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:40.737 12:13:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.737 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:40.737 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:40.737 12:13:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:24:40.737 12:13:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:24:47.346 12:13:36 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:47.346 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:47.346 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:47.346 Found net devices under 0000:af:00.0: cvl_0_0 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.346 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:47.347 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.347 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:24:47.347 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:47.347 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:47.347 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:47.347 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.347 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:47.347 Found net devices under 0000:af:00.1: cvl_0_1 00:24:47.347 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.347 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:47.347 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:24:47.347 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:47.347 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:47.347 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:47.347 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:47.347 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:47.347 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:47.347 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:47.347 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:47.347 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:47.347 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:47.347 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:47.347 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:47.347 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:47.347 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:47.347 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:47.347 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:47.347 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:47.347 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:47.347 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:47.347 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:47.347 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:47.347 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:47.347 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:47.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:47.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:24:47.347 00:24:47.347 --- 10.0.0.2 ping statistics --- 00:24:47.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.347 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:24:47.347 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:47.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:47.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:24:47.606 00:24:47.606 --- 10.0.0.1 ping statistics --- 00:24:47.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.606 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:24:47.606 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:47.606 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:24:47.606 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:47.606 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:47.606 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:47.606 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:47.606 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:47.606 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:47.606 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:47.606 12:13:36 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:47.606 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:47.606 12:13:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:47.606 12:13:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:47.606 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2317883 00:24:47.606 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:47.606 12:13:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2317883 00:24:47.606 12:13:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 2317883 ']' 
00:24:47.606 12:13:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.606 12:13:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:47.606 12:13:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:47.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:47.606 12:13:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:47.606 12:13:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:47.606 [2024-06-10 12:13:36.955062] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:24:47.606 [2024-06-10 12:13:36.955113] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.606 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.606 [2024-06-10 12:13:37.029790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:47.606 [2024-06-10 12:13:37.104375] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:47.606 [2024-06-10 12:13:37.104412] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:47.606 [2024-06-10 12:13:37.104425] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:47.606 [2024-06-10 12:13:37.104433] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:47.606 [2024-06-10 12:13:37.104441] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:47.606 [2024-06-10 12:13:37.104544] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:47.606 [2024-06-10 12:13:37.104636] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:24:47.606 [2024-06-10 12:13:37.104638] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:48.540 12:13:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:48.540 12:13:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:24:48.540 12:13:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:48.540 12:13:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:48.540 12:13:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:48.540 12:13:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:48.540 12:13:37 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:48.540 [2024-06-10 12:13:37.968687] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:48.540 12:13:37 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:48.797 Malloc0 00:24:48.797 12:13:38 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:49.054 12:13:38 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:49.054 12:13:38 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:49.312 [2024-06-10 12:13:38.709766] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:49.312 12:13:38 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:49.569 [2024-06-10 12:13:38.882224] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:49.569 12:13:38 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:49.569 [2024-06-10 12:13:39.050769] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:49.569 12:13:39 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:49.569 12:13:39 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2318189 00:24:49.569 12:13:39 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:49.569 12:13:39 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2318189 /var/tmp/bdevperf.sock 00:24:49.569 12:13:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 2318189 ']' 00:24:49.569 12:13:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:49.569 12:13:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:49.569 12:13:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:49.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:49.569 12:13:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:49.569 12:13:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:50.503 12:13:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:50.503 12:13:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:24:50.503 12:13:39 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:50.761 NVMe0n1 00:24:51.018 12:13:40 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:51.275 00:24:51.275 12:13:40 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:51.275 12:13:40 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2318453 00:24:51.275 12:13:40 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:52.207 12:13:41 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:52.465 [2024-06-10 12:13:41.740035] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf3f90 is same with the state(5) to be set 00:24:52.465 [2024-06-10 12:13:41.740088] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf3f90 is 
same with the state(5) to be set 00:24:52.466 12:13:41 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:55.747 12:13:44 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock
bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:55.747
00:24:55.747 12:13:45 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:55.747 [2024-06-10 12:13:45.250597] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf5540 is same with the state(5) to be set
00:24:55.748 [... identical tcp.c:1602 message repeated, timestamps 12:13:45.250645 through 12:13:45.251355 ...]
00:24:56.005 12:13:45 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:24:59.283 12:13:48 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:59.283 [2024-06-10 12:13:48.444813] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:59.283 12:13:48 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:25:00.215 12:13:49 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:00.215 [2024-06-10 12:13:49.643030] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf5c40 is same with the state(5) to be set
00:25:00.216 [... identical tcp.c:1602 message repeated, timestamps 12:13:49.643080 through 12:13:49.643817 ...]
00:25:00.216 12:13:49 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2318453
00:25:06.780 0
00:25:06.780 12:13:55 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2318189
00:25:06.780 12:13:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 2318189 ']'
00:25:06.780 12:13:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 2318189
00:25:06.780 12:13:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname
00:25:06.780 12:13:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:25:06.780 12:13:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2318189
00:25:06.780 12:13:55 nvmf_tcp.nvmf_failover --
common/autotest_common.sh@955 -- # process_name=reactor_0
00:25:06.780 12:13:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:25:06.780 12:13:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2318189'
killing process with pid 2318189
12:13:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 2318189
12:13:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 2318189
12:13:55 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-06-10 12:13:39.101276] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization...
[2024-06-10 12:13:39.101330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2318189 ]
00:25:06.780 EAL: No free 2048 kB hugepages reported on node 1
00:25:06.780 [2024-06-10 12:13:39.173008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:06.780 [2024-06-10 12:13:39.245281] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:25:06.780 Running I/O for 15 seconds...
00:25:06.780 [2024-06-10 12:13:41.741957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:100888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.780 [2024-06-10 12:13:41.741994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.780 [2024-06-10 12:13:41.742013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.780 [2024-06-10 12:13:41.742024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.780 [2024-06-10 12:13:41.742035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:100904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.780 [2024-06-10 12:13:41.742045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.780 [2024-06-10 12:13:41.742057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:100912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.780 [2024-06-10 12:13:41.742066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.780 [2024-06-10 12:13:41.742077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:100920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.780 [2024-06-10 12:13:41.742086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.780 [2024-06-10 12:13:41.742097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:100928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.780 [2024-06-10 12:13:41.742106] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.780 [2024-06-10 12:13:41.742116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:100936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.780 [2024-06-10 12:13:41.742125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.780 [2024-06-10 12:13:41.742136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:100944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.780 [2024-06-10 12:13:41.742145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.780 [2024-06-10 12:13:41.742155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:100952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.780 [2024-06-10 12:13:41.742165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.780 [2024-06-10 12:13:41.742175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:100960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.780 [2024-06-10 12:13:41.742184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.780 [2024-06-10 12:13:41.742195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:100968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.780 [2024-06-10 12:13:41.742204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.780 [2024-06-10 12:13:41.742220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 
nsid:1 lba:100976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.780 [2024-06-10 12:13:41.742230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.780 [2024-06-10 12:13:41.742241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:100984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.780 [2024-06-10 12:13:41.742250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.780 [2024-06-10 12:13:41.742260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:100992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.780 [2024-06-10 12:13:41.742270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.780 [2024-06-10 12:13:41.742281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:101000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.780 [2024-06-10 12:13:41.742290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.780 [2024-06-10 12:13:41.742300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:101008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.780 [2024-06-10 12:13:41.742309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.780 [2024-06-10 12:13:41.742320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:101016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.780 [2024-06-10 12:13:41.742330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:06.780 [2024-06-10 12:13:41.742340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:101024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.780 [2024-06-10 12:13:41.742350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.780 [2024-06-10 12:13:41.742360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:101032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.780 [2024-06-10 12:13:41.742369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.780 [2024-06-10 12:13:41.742380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:101040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.780 [2024-06-10 12:13:41.742389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.780 [2024-06-10 12:13:41.742399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:101048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.780 [2024-06-10 12:13:41.742408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.780 [2024-06-10 12:13:41.742419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:101056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.780 [2024-06-10 12:13:41.742428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.742439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:101064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.742448] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.742459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:101384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.781 [2024-06-10 12:13:41.742470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.742485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:101392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.781 [2024-06-10 12:13:41.742495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.742506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:101072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.742515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.742526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:101080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.742535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.742545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:101088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.742554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.742565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:101096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.742574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.742585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:101104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.742594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.742604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:101112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.742613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.742624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:101120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.742633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.742644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:101128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.742653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.742663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:101136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.742673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 
[2024-06-10 12:13:41.742684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:101144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.742693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.742703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:101152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.742713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.742724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:101160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.742734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.742745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:101168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.742754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.742765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:101176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.742774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.742784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:101184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.742794] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.742805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:101192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.742814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.742824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:101200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.742833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.742844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:101208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.742853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.742863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:101216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.742872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.742883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:101224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.742892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.742903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 
lba:101232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.742912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.742923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:101240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.742932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.742942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.742952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.742962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:101256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.742974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.742985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:101264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.742994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.743004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:101272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.743014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 
[2024-06-10 12:13:41.743024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:101280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.743033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.743043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:101288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.743052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.743063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:101296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.743072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.743082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:101304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.743091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.743102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:101312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.743111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.743121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:101320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.743130] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.743140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:101328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.743150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.743160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:101336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.743170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.743181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:101344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.743190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.743200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:101352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.743209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.781 [2024-06-10 12:13:41.743221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:101360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.781 [2024-06-10 12:13:41.743231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 
nsid:1 lba:101368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.782 [2024-06-10 12:13:41.743250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:101376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.782 [2024-06-10 12:13:41.743270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:101400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:101408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:101424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:06.782 [2024-06-10 12:13:41.743360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:101432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:101440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:101448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:101456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:101464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:101472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743472] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:101480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:101488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:101496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:101504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 
lba:101520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:101528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:101536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:101544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:101552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:101560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 
[2024-06-10 12:13:41.743706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:101568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:101576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:101584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:101592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:101600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:101608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743813] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:101616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:101624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:101632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:101640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:101648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 
lba:101656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:101664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:101672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.743982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:101680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.743992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.744002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:101688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.744011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 [2024-06-10 12:13:41.744022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:101696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.782 [2024-06-10 12:13:41.744031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.782 
[2024-06-10 12:13:41.744041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:101704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.783 [2024-06-10 12:13:41.744050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.783 [2024-06-10 12:13:41.744061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:101712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.783 [2024-06-10 12:13:41.744070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.783 [2024-06-10 12:13:41.744080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:101720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.783 [2024-06-10 12:13:41.744089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.783 [2024-06-10 12:13:41.744099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:101728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.783 [2024-06-10 12:13:41.744109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.783 [2024-06-10 12:13:41.744119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:101736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.783 [2024-06-10 12:13:41.744130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.783 [2024-06-10 12:13:41.744140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:101744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.783 [2024-06-10 12:13:41.744149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.783 [2024-06-10 12:13:41.744160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.783 [2024-06-10 12:13:41.744169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.783 [2024-06-10 12:13:41.744179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.783 [2024-06-10 12:13:41.744188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.783 [2024-06-10 12:13:41.744199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:101768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.783 [2024-06-10 12:13:41.744208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.783 [2024-06-10 12:13:41.744218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:101776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.783 [2024-06-10 12:13:41.744229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.783 [2024-06-10 12:13:41.744252] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.783 [2024-06-10 12:13:41.744262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101784 len:8 PRP1 0x0 PRP2 0x0 00:25:06.783 [2024-06-10 12:13:41.744271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.783 
[2024-06-10 12:13:41.744283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.783 [2024-06-10 12:13:41.744290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.783 [2024-06-10 12:13:41.744298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101792 len:8 PRP1 0x0 PRP2 0x0 00:25:06.783 [2024-06-10 12:13:41.744307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.783 [2024-06-10 12:13:41.744317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.783 [2024-06-10 12:13:41.744324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.783 [2024-06-10 12:13:41.744332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101800 len:8 PRP1 0x0 PRP2 0x0 00:25:06.783 [2024-06-10 12:13:41.744341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.783 [2024-06-10 12:13:41.744350] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.783 [2024-06-10 12:13:41.744357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.783 [2024-06-10 12:13:41.744365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101808 len:8 PRP1 0x0 PRP2 0x0 00:25:06.783 [2024-06-10 12:13:41.744375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.783 [2024-06-10 12:13:41.744384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.783 [2024-06-10 12:13:41.744391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:25:06.783 [2024-06-10 12:13:41.744399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101816 len:8 PRP1 0x0 PRP2 0x0 00:25:06.783 [2024-06-10 12:13:41.744408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.783 [2024-06-10 12:13:41.744417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.783 [2024-06-10 12:13:41.744424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.783 [2024-06-10 12:13:41.744432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101824 len:8 PRP1 0x0 PRP2 0x0 00:25:06.783 [2024-06-10 12:13:41.744441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.783 [2024-06-10 12:13:41.744450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.783 [2024-06-10 12:13:41.744457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.783 [2024-06-10 12:13:41.744464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101832 len:8 PRP1 0x0 PRP2 0x0 00:25:06.783 [2024-06-10 12:13:41.744473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.783 [2024-06-10 12:13:41.744486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.783 [2024-06-10 12:13:41.744493] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.783 [2024-06-10 12:13:41.744501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101840 len:8 PRP1 0x0 PRP2 0x0 00:25:06.783 [2024-06-10 12:13:41.744511] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.783 [2024-06-10 12:13:41.744521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.783 [2024-06-10 12:13:41.744528] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.783 [2024-06-10 12:13:41.744536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101848 len:8 PRP1 0x0 PRP2 0x0 00:25:06.783 [2024-06-10 12:13:41.744545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.783 [2024-06-10 12:13:41.744554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.783 [2024-06-10 12:13:41.744562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.783 [2024-06-10 12:13:41.744570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101856 len:8 PRP1 0x0 PRP2 0x0 00:25:06.783 [2024-06-10 12:13:41.744579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.783 [2024-06-10 12:13:41.744588] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.783 [2024-06-10 12:13:41.744595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.783 [2024-06-10 12:13:41.744603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101864 len:8 PRP1 0x0 PRP2 0x0 00:25:06.783 [2024-06-10 12:13:41.744612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.783 [2024-06-10 12:13:41.744621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.783 [2024-06-10 12:13:41.744628] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.783 [2024-06-10 12:13:41.744636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101872 len:8 PRP1 0x0 PRP2 0x0 00:25:06.783 [2024-06-10 12:13:41.744645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.783 [2024-06-10 12:13:41.744654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.783 [2024-06-10 12:13:41.744661] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.783 [2024-06-10 12:13:41.744669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101880 len:8 PRP1 0x0 PRP2 0x0 00:25:06.783 [2024-06-10 12:13:41.744678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.783 [2024-06-10 12:13:41.744687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.783 [2024-06-10 12:13:41.744694] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.783 [2024-06-10 12:13:41.744702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101888 len:8 PRP1 0x0 PRP2 0x0 00:25:06.783 [2024-06-10 12:13:41.744711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.783 [2024-06-10 12:13:41.755968] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.783 [2024-06-10 12:13:41.755982] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.783 [2024-06-10 12:13:41.755994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101896 len:8 PRP1 0x0 PRP2 
0x0 00:25:06.783 [2024-06-10 12:13:41.756008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.783 [2024-06-10 12:13:41.756021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.783 [2024-06-10 12:13:41.756033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.783 [2024-06-10 12:13:41.756044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101904 len:8 PRP1 0x0 PRP2 0x0 00:25:06.783 [2024-06-10 12:13:41.756056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.783 [2024-06-10 12:13:41.756107] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c6e940 was disconnected and freed. reset controller. 00:25:06.783 [2024-06-10 12:13:41.756122] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:06.783 [2024-06-10 12:13:41.756151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.783 [2024-06-10 12:13:41.756165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.783 [2024-06-10 12:13:41.756178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.783 [2024-06-10 12:13:41.756190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.783 [2024-06-10 12:13:41.756203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.783 [2024-06-10 12:13:41.756215] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.783 [2024-06-10 12:13:41.756228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.784 [2024-06-10 12:13:41.756239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.784 [2024-06-10 12:13:41.756252] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:06.784 [2024-06-10 12:13:41.756296] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c503a0 (9): Bad file descriptor 00:25:06.784 [2024-06-10 12:13:41.759888] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:06.784 [2024-06-10 12:13:41.876851] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:06.784 [2024-06-10 12:13:45.252561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.784 [2024-06-10 12:13:45.252598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.784 [2024-06-10 12:13:45.252614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.784 [2024-06-10 12:13:45.252625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.784 [2024-06-10 12:13:45.252636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:65976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.784 [2024-06-10 12:13:45.252645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.784 [2024-06-10 12:13:45.252656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:65984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.784 [2024-06-10 12:13:45.252665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.784 [2024-06-10 12:13:45.252676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.784 [2024-06-10 12:13:45.252685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.784 [2024-06-10 12:13:45.252699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:66000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.784 [2024-06-10 12:13:45.252708] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.784 [2024-06-10 12:13:45.252719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:66008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.784 [2024-06-10 12:13:45.252727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.784 [2024-06-10 12:13:45.252738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.784 [2024-06-10 12:13:45.252747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.784 [2024-06-10 12:13:45.252758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.784 [2024-06-10 12:13:45.252767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.784 [2024-06-10 12:13:45.252778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:66032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.784 [2024-06-10 12:13:45.252787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.784 [2024-06-10 12:13:45.252797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.784 [2024-06-10 12:13:45.252806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.784 [2024-06-10 12:13:45.252817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 
lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.784 [2024-06-10 12:13:45.252826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.784 [2024-06-10 12:13:45.252836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.784 [2024-06-10 12:13:45.252845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.784 [2024-06-10 12:13:45.252855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:66064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.784 [2024-06-10 12:13:45.252864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.784 [2024-06-10 12:13:45.252875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:66072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.784 [2024-06-10 12:13:45.252883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.784 [2024-06-10 12:13:45.252894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:66080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.784 [2024-06-10 12:13:45.252903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.784 [2024-06-10 12:13:45.252913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:66088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.784 [2024-06-10 12:13:45.252923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.784 
[2024-06-10 12:13:45.252933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:66096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.784 [2024-06-10 12:13:45.252943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.784 [2024-06-10 12:13:45.252954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:66104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.784 [2024-06-10 12:13:45.252964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.784 [2024-06-10 12:13:45.252974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:66112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.784 [2024-06-10 12:13:45.252983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.784 [2024-06-10 12:13:45.252993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:66120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.784 [2024-06-10 12:13:45.253002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.784 [2024-06-10 12:13:45.253013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:66128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.784 [2024-06-10 12:13:45.253022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.784 [2024-06-10 12:13:45.253032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:66136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.784 [2024-06-10 12:13:45.253041] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.784 [2024-06-10 12:13:45.253051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:66144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.784 [2024-06-10 12:13:45.253060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.784 [2024-06-10 12:13:45.253071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.784 [2024-06-10 12:13:45.253080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.784 [2024-06-10 12:13:45.253090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.784 [2024-06-10 12:13:45.253099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.784 [2024-06-10 12:13:45.253110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.784 [2024-06-10 12:13:45.253119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.784 [2024-06-10 12:13:45.253129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.784 [2024-06-10 12:13:45.253138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.784 [2024-06-10 12:13:45.253149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 
lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.784 [2024-06-10 12:13:45.253158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.784 [2024-06-10 12:13:45.253168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.784 [2024-06-10 12:13:45.253178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.785 [2024-06-10 12:13:45.253190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.785 [2024-06-10 12:13:45.253199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.785 [2024-06-10 12:13:45.253209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.785 [2024-06-10 12:13:45.253218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.785 [2024-06-10 12:13:45.253228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:66224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.785 [2024-06-10 12:13:45.253238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.785 [2024-06-10 12:13:45.253248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.785 [2024-06-10 12:13:45.253257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.785 
[2024-06-10 12:13:45.253268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.785 [2024-06-10 12:13:45.253277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.785 [2024-06-10 12:13:45.253288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.785 [2024-06-10 12:13:45.253297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.785 [2024-06-10 12:13:45.253307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.785 [2024-06-10 12:13:45.253316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.785 [2024-06-10 12:13:45.253326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.785 [2024-06-10 12:13:45.253335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.785 [2024-06-10 12:13:45.253345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.785 [2024-06-10 12:13:45.253354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.785 [2024-06-10 12:13:45.253365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:66152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.785 [2024-06-10 12:13:45.253374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.785 [2024-06-10 12:13:45.253384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.785 [2024-06-10 12:13:45.253393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.785 [2024-06-10 12:13:45.253403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.785 [2024-06-10 12:13:45.253412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.785 [2024-06-10 12:13:45.253423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.785 [2024-06-10 12:13:45.253431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.785 [2024-06-10 12:13:45.253443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.785 [2024-06-10 12:13:45.253452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.785 [2024-06-10 12:13:45.253462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.785 [2024-06-10 12:13:45.253472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.785 [2024-06-10 12:13:45.253488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.785 [2024-06-10 12:13:45.253497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.785 [2024-06-10 12:13:45.253507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.785 [2024-06-10 12:13:45.253516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.785 [2024-06-10 12:13:45.253527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.785 [2024-06-10 12:13:45.253536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.785 [2024-06-10 12:13:45.253546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.785 [2024-06-10 12:13:45.253555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.785 [2024-06-10 12:13:45.253566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.785 [2024-06-10 12:13:45.253575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.785 [2024-06-10 12:13:45.253586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.785 [2024-06-10 12:13:45.253595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.785 [2024-06-10 12:13:45.253605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:66368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.785 [2024-06-10 12:13:45.253614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.785 [2024-06-10 12:13:45.253624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.785 [2024-06-10 12:13:45.253633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.785 [2024-06-10 12:13:45.253643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.785 [2024-06-10 12:13:45.253652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.785 [2024-06-10 12:13:45.253662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.785 [2024-06-10 12:13:45.253671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.785 [2024-06-10 12:13:45.253682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.785 [2024-06-10 12:13:45.253692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.785 [2024-06-10 12:13:45.253702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.785 [2024-06-10 12:13:45.253711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.785 [2024-06-10 12:13:45.253721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.785 [2024-06-10 12:13:45.253730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.785 [2024-06-10 12:13:45.253741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.785 [2024-06-10 12:13:45.253751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.785 [2024-06-10 12:13:45.253761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.785 [2024-06-10 12:13:45.253770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.785 [2024-06-10 12:13:45.253780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.785 [2024-06-10 12:13:45.253789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.785 [2024-06-10 12:13:45.253799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.785 [2024-06-10 12:13:45.253809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.785 [2024-06-10 12:13:45.253819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.785 [2024-06-10 12:13:45.253828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.785 [2024-06-10 12:13:45.253838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.785 [2024-06-10 12:13:45.253847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.785 [2024-06-10 12:13:45.253858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.785 [2024-06-10 12:13:45.253867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.785 [2024-06-10 12:13:45.253877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.786 [2024-06-10 12:13:45.253886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.786 [2024-06-10 12:13:45.253897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.786 [2024-06-10 12:13:45.253907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.786 [2024-06-10 12:13:45.253917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.786 [2024-06-10 12:13:45.253927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.786 [2024-06-10 12:13:45.253938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.786 [2024-06-10 12:13:45.253947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.786 [2024-06-10 12:13:45.253958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.786 [2024-06-10 12:13:45.253967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.786 [2024-06-10 12:13:45.253977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.786 [2024-06-10 12:13:45.253987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.786 [2024-06-10 12:13:45.253998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.786 [2024-06-10 12:13:45.254008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.786 [2024-06-10 12:13:45.254018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.786 [2024-06-10 12:13:45.254027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.786 [2024-06-10 12:13:45.254037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.786 [2024-06-10 12:13:45.254046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.786 [2024-06-10 12:13:45.254056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.786 [2024-06-10 12:13:45.254065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.786 [2024-06-10 12:13:45.254075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.786 [2024-06-10 12:13:45.254084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.786 [2024-06-10 12:13:45.254094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.786 [2024-06-10 12:13:45.254103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.786 [2024-06-10 12:13:45.254113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.786 [2024-06-10 12:13:45.254123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.786 [2024-06-10 12:13:45.254133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.786 [2024-06-10 12:13:45.254143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.786 [2024-06-10 12:13:45.254153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.786 [2024-06-10 12:13:45.254162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.786 [2024-06-10 12:13:45.254172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.786 [2024-06-10 12:13:45.254183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.786 [2024-06-10 12:13:45.254193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.786 [2024-06-10 12:13:45.254202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.786 [2024-06-10 12:13:45.254212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.786 [2024-06-10 12:13:45.254221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.786 [2024-06-10 12:13:45.254232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.786 [2024-06-10 12:13:45.254241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.786 [2024-06-10 12:13:45.254251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.786 [2024-06-10 12:13:45.254260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.786 [2024-06-10 12:13:45.254270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.786 [2024-06-10 12:13:45.254279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.786 [2024-06-10 12:13:45.254289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.786 [2024-06-10 12:13:45.254298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.786 [2024-06-10 12:13:45.254309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.786 [2024-06-10 12:13:45.254317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.786 [2024-06-10 12:13:45.254328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.786 [2024-06-10 12:13:45.254337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.786 [2024-06-10 12:13:45.254347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.786 [2024-06-10 12:13:45.254356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.786 [2024-06-10 12:13:45.254366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.786 [2024-06-10 12:13:45.254375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.786 [2024-06-10 12:13:45.254386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.786 [2024-06-10 12:13:45.254396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.786 [2024-06-10 12:13:45.254406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.786 [2024-06-10 12:13:45.254416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.786 [2024-06-10 12:13:45.254426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.786 [2024-06-10 12:13:45.254436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.786 [2024-06-10 12:13:45.254447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:66712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.786 [2024-06-10 12:13:45.254456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.786 [2024-06-10 12:13:45.254483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.786 [2024-06-10 12:13:45.254492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66720 len:8 PRP1 0x0 PRP2 0x0
00:25:06.786 [2024-06-10 12:13:45.254502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.786 [2024-06-10 12:13:45.254515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:06.786 [2024-06-10 12:13:45.254522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.786 [2024-06-10 12:13:45.254530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66728 len:8 PRP1 0x0 PRP2 0x0
00:25:06.786 [2024-06-10 12:13:45.254541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.786 [2024-06-10 12:13:45.254550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:06.786 [2024-06-10 12:13:45.254558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.786 [2024-06-10 12:13:45.254566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66736 len:8 PRP1 0x0 PRP2 0x0
00:25:06.786 [2024-06-10 12:13:45.254575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.786 [2024-06-10 12:13:45.254584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:06.786 [2024-06-10 12:13:45.254592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.786 [2024-06-10 12:13:45.254599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66744 len:8 PRP1 0x0 PRP2 0x0
00:25:06.786 [2024-06-10 12:13:45.254609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.786 [2024-06-10 12:13:45.254619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:06.786 [2024-06-10 12:13:45.254627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.787 [2024-06-10 12:13:45.254635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66752 len:8 PRP1 0x0 PRP2 0x0
00:25:06.787 [2024-06-10 12:13:45.254643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.787 [2024-06-10 12:13:45.254652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:06.787 [2024-06-10 12:13:45.254659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.787 [2024-06-10 12:13:45.254667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66760 len:8 PRP1 0x0 PRP2 0x0
00:25:06.787 [2024-06-10 12:13:45.254676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.787 [2024-06-10 12:13:45.254685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:06.787 [2024-06-10 12:13:45.254692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.787 [2024-06-10 12:13:45.254699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66768 len:8 PRP1 0x0 PRP2 0x0
00:25:06.787 [2024-06-10 12:13:45.254708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.787 [2024-06-10 12:13:45.254719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:06.787 [2024-06-10 12:13:45.254726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.787 [2024-06-10 12:13:45.254733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66776 len:8 PRP1 0x0 PRP2 0x0
00:25:06.787 [2024-06-10 12:13:45.254742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.787 [2024-06-10 12:13:45.254751] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:06.787 [2024-06-10 12:13:45.254758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.787 [2024-06-10 12:13:45.254765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66784 len:8 PRP1 0x0 PRP2 0x0
00:25:06.787 [2024-06-10 12:13:45.254774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.787 [2024-06-10 12:13:45.254783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:06.787 [2024-06-10 12:13:45.254790] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.787 [2024-06-10 12:13:45.254798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66792 len:8 PRP1 0x0 PRP2 0x0
00:25:06.787 [2024-06-10 12:13:45.254808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.787 [2024-06-10 12:13:45.254817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:06.787 [2024-06-10 12:13:45.254824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.787 [2024-06-10 12:13:45.254832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66800 len:8 PRP1 0x0 PRP2 0x0
00:25:06.787 [2024-06-10 12:13:45.254841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.787 [2024-06-10 12:13:45.254850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:06.787 [2024-06-10 12:13:45.254857] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.787 [2024-06-10 12:13:45.254864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66808 len:8 PRP1 0x0 PRP2 0x0
00:25:06.787 [2024-06-10 12:13:45.254873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.787 [2024-06-10 12:13:45.254882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:06.787 [2024-06-10 12:13:45.254889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.787 [2024-06-10 12:13:45.254897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66816 len:8 PRP1 0x0 PRP2 0x0
00:25:06.787 [2024-06-10 12:13:45.254905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.787 [2024-06-10 12:13:45.254914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:06.787 [2024-06-10 12:13:45.254921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.787 [2024-06-10 12:13:45.254929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66824 len:8 PRP1 0x0 PRP2 0x0
00:25:06.787 [2024-06-10 12:13:45.254938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.787 [2024-06-10 12:13:45.254947] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:06.787 [2024-06-10 12:13:45.254954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.787 [2024-06-10 12:13:45.254961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66832 len:8 PRP1 0x0 PRP2 0x0
00:25:06.787 [2024-06-10 12:13:45.254971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.787 [2024-06-10 12:13:45.254980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:06.787 [2024-06-10 12:13:45.254988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.787 [2024-06-10 12:13:45.254995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66840 len:8 PRP1 0x0 PRP2 0x0
00:25:06.787 [2024-06-10 12:13:45.255004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.787 [2024-06-10 12:13:45.255013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:06.787 [2024-06-10 12:13:45.255020] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.787 [2024-06-10 12:13:45.255027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66848 len:8 PRP1 0x0 PRP2 0x0
00:25:06.787 [2024-06-10 12:13:45.255039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.787 [2024-06-10 12:13:45.255048] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:06.787 [2024-06-10 12:13:45.255055] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.787 [2024-06-10 12:13:45.255062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66856 len:8 PRP1 0x0 PRP2 0x0
00:25:06.787 [2024-06-10 12:13:45.255072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.787 [2024-06-10 12:13:45.255081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:06.787 [2024-06-10 12:13:45.255088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.787 [2024-06-10 12:13:45.255096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66864 len:8 PRP1 0x0 PRP2 0x0
00:25:06.787 [2024-06-10 12:13:45.255104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.787 [2024-06-10 12:13:45.255113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:06.787 [2024-06-10 12:13:45.255121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.787 [2024-06-10 12:13:45.255128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66872 len:8 PRP1 0x0 PRP2 0x0
00:25:06.787 [2024-06-10 12:13:45.255136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.787 [2024-06-10 12:13:45.255145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:06.787 [2024-06-10 12:13:45.255153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.787 [2024-06-10 12:13:45.255160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66880 len:8 PRP1 0x0 PRP2 0x0
00:25:06.787 [2024-06-10 12:13:45.255169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.787 [2024-06-10 12:13:45.255178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:06.787 [2024-06-10 12:13:45.255185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.787 [2024-06-10 12:13:45.255192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66888 len:8 PRP1 0x0 PRP2 0x0
00:25:06.787 [2024-06-10 12:13:45.255202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.787 [2024-06-10 12:13:45.255211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:06.787 [2024-06-10 12:13:45.255222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.787 [2024-06-10 12:13:45.268222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66896 len:8 PRP1 0x0 PRP2 0x0
00:25:06.787 [2024-06-10 12:13:45.268235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.787 [2024-06-10 12:13:45.268245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:06.787 [2024-06-10 12:13:45.268253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.787 [2024-06-10 12:13:45.268261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66904 len:8 PRP1 0x0 PRP2 0x0
00:25:06.787 [2024-06-10 12:13:45.268271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.787 [2024-06-10 12:13:45.268279] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:06.787 [2024-06-10 12:13:45.268286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.787 [2024-06-10 12:13:45.268294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66912 len:8 PRP1 0x0 PRP2 0x0
00:25:06.787 [2024-06-10 12:13:45.268303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.787 [2024-06-10 12:13:45.268312] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:06.787 [2024-06-10 12:13:45.268319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.787 [2024-06-10 12:13:45.268343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66920 len:8 PRP1 0x0 PRP2 0x0
00:25:06.787 [2024-06-10 12:13:45.268355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.787 [2024-06-10 12:13:45.268367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:06.787 [2024-06-10 12:13:45.268377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.787 [2024-06-10 12:13:45.268387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66928 len:8 PRP1 0x0 PRP2 0x0
00:25:06.787 [2024-06-10 12:13:45.268399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.787 [2024-06-10 12:13:45.268411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:06.787 [2024-06-10 12:13:45.268420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.787 [2024-06-10 12:13:45.268430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66936 len:8 PRP1 0x0 PRP2 0x0
00:25:06.787 [2024-06-10 12:13:45.268442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.787 [2024-06-10 12:13:45.268454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:06.788 [2024-06-10 12:13:45.268463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.788 [2024-06-10 12:13:45.268473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66944 len:8 PRP1 0x0 PRP2 0x0
00:25:06.788 [2024-06-10 12:13:45.268490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.788 [2024-06-10 12:13:45.268502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:06.788 [2024-06-10 12:13:45.268511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.788 [2024-06-10 12:13:45.268522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66952 len:8 PRP1 0x0 PRP2 0x0
00:25:06.788 [2024-06-10 12:13:45.268533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.788 [2024-06-10 12:13:45.268548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:06.788 [2024-06-10 12:13:45.268558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.788 [2024-06-10 12:13:45.268568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66960 len:8 PRP1 0x0 PRP2 0x0
00:25:06.788 [2024-06-10 12:13:45.268579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.788 [2024-06-10 12:13:45.268591] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:06.788 [2024-06-10 12:13:45.268601] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.788 [2024-06-10 12:13:45.268610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66968 len:8 PRP1 0x0 PRP2 0x0
00:25:06.788 [2024-06-10 12:13:45.268622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0
sqhd:0000 p:0 m:0 dnr:0 00:25:06.788 [2024-06-10 12:13:45.268634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.788 [2024-06-10 12:13:45.268644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.788 [2024-06-10 12:13:45.268655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66976 len:8 PRP1 0x0 PRP2 0x0 00:25:06.788 [2024-06-10 12:13:45.268667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.788 [2024-06-10 12:13:45.268715] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c4a6a0 was disconnected and freed. reset controller. 00:25:06.788 [2024-06-10 12:13:45.268729] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:06.788 [2024-06-10 12:13:45.268757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.788 [2024-06-10 12:13:45.268770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.788 [2024-06-10 12:13:45.268783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.788 [2024-06-10 12:13:45.268795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.788 [2024-06-10 12:13:45.268807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.788 [2024-06-10 12:13:45.268819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.788 [2024-06-10 12:13:45.268831] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.788 [2024-06-10 12:13:45.268843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.788 [2024-06-10 12:13:45.268855] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:06.788 [2024-06-10 12:13:45.268893] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c503a0 (9): Bad file descriptor 00:25:06.788 [2024-06-10 12:13:45.272462] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:06.788 [2024-06-10 12:13:45.433980] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:06.788 [2024-06-10 12:13:49.644955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:123192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.788 [2024-06-10 12:13:49.644992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.788 [2024-06-10 12:13:49.645010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:123200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.788 [2024-06-10 12:13:49.645024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.788 [2024-06-10 12:13:49.645034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:123208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.788 [2024-06-10 12:13:49.645044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.788 [2024-06-10 12:13:49.645054] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:123216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.788 [2024-06-10 12:13:49.645063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.788 [2024-06-10 12:13:49.645074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:123224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.788 [2024-06-10 12:13:49.645082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.788 [2024-06-10 12:13:49.645093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:123232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.788 [2024-06-10 12:13:49.645102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.788 [2024-06-10 12:13:49.645112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:123240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.788 [2024-06-10 12:13:49.645121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.788 [2024-06-10 12:13:49.645132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:123248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.788 [2024-06-10 12:13:49.645141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.788 [2024-06-10 12:13:49.645152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:123256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.788 [2024-06-10 12:13:49.645161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.788 [2024-06-10 12:13:49.645172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:123264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.788 [2024-06-10 12:13:49.645181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.788 [2024-06-10 12:13:49.645191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:123272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.788 [2024-06-10 12:13:49.645200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.788 [2024-06-10 12:13:49.645211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:123280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.788 [2024-06-10 12:13:49.645220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.788 [2024-06-10 12:13:49.645230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:123288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.788 [2024-06-10 12:13:49.645239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.788 [2024-06-10 12:13:49.645249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:123296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.788 [2024-06-10 12:13:49.645258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.788 [2024-06-10 12:13:49.645270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:123304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:06.788 [2024-06-10 12:13:49.645279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.788 [2024-06-10 12:13:49.645289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:123312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.788 [2024-06-10 12:13:49.645298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.788 [2024-06-10 12:13:49.645309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:123320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.788 [2024-06-10 12:13:49.645319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.788 [2024-06-10 12:13:49.645329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:123328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.788 [2024-06-10 12:13:49.645338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.788 [2024-06-10 12:13:49.645348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:123336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.788 [2024-06-10 12:13:49.645357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.788 [2024-06-10 12:13:49.645367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:123344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.788 [2024-06-10 12:13:49.645376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.788 [2024-06-10 12:13:49.645387] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:123352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.788 [2024-06-10 12:13:49.645396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.788 [2024-06-10 12:13:49.645406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:123360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.789 [2024-06-10 12:13:49.645415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.789 [2024-06-10 12:13:49.645427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:123368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.789 [2024-06-10 12:13:49.645436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.789 [2024-06-10 12:13:49.645447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:123688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.789 [2024-06-10 12:13:49.645457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.789 [2024-06-10 12:13:49.645467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:123696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.789 [2024-06-10 12:13:49.645482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.789 [2024-06-10 12:13:49.645493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:123376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.789 [2024-06-10 12:13:49.645502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.789 [2024-06-10 12:13:49.645512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:123384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.789 [2024-06-10 12:13:49.645523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.789 [2024-06-10 12:13:49.645534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:123392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.789 [2024-06-10 12:13:49.645544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.789 [2024-06-10 12:13:49.645555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:123400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.789 [2024-06-10 12:13:49.645564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.789 [2024-06-10 12:13:49.645574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:123408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.789 [2024-06-10 12:13:49.645584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.789 [2024-06-10 12:13:49.645595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:123416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.789 [2024-06-10 12:13:49.645604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.789 [2024-06-10 12:13:49.645614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:123424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:06.789 [2024-06-10 12:13:49.645624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.789 [2024-06-10 12:13:49.645635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:123432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.789 [2024-06-10 12:13:49.645645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.789 [2024-06-10 12:13:49.645656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:123440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.789 [2024-06-10 12:13:49.645666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.789 [2024-06-10 12:13:49.645677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:123448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.789 [2024-06-10 12:13:49.645686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.789 [2024-06-10 12:13:49.645697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:123456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.789 [2024-06-10 12:13:49.645706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.789 [2024-06-10 12:13:49.645718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:123464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.789 [2024-06-10 12:13:49.645728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.789 [2024-06-10 12:13:49.645738] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:123472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.789 [2024-06-10 12:13:49.645747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.789 [2024-06-10 12:13:49.645759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:123480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.789 [2024-06-10 12:13:49.645768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.789 [2024-06-10 12:13:49.645781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:123488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.789 [2024-06-10 12:13:49.645791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.789 [2024-06-10 12:13:49.645801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:123496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.789 [2024-06-10 12:13:49.645811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.789 [2024-06-10 12:13:49.645823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:123504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.789 [2024-06-10 12:13:49.645832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.789 [2024-06-10 12:13:49.645843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:123512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.789 [2024-06-10 12:13:49.645852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.789 [2024-06-10 12:13:49.645863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:123520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.789 [2024-06-10 12:13:49.645873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.789 [2024-06-10 12:13:49.645883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:123528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.789 [2024-06-10 12:13:49.645892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.789 [2024-06-10 12:13:49.645902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:123536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.789 [2024-06-10 12:13:49.645911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.789 [2024-06-10 12:13:49.645922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:123544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.789 [2024-06-10 12:13:49.645932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.789 [2024-06-10 12:13:49.645943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:123552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.789 [2024-06-10 12:13:49.645952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.789 [2024-06-10 12:13:49.645962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:123560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:06.789 [2024-06-10 12:13:49.645972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.789 [2024-06-10 12:13:49.645983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:123568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.789 [2024-06-10 12:13:49.645992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.789 [2024-06-10 12:13:49.646003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:123576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.789 [2024-06-10 12:13:49.646012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.789 [2024-06-10 12:13:49.646024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:123584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.789 [2024-06-10 12:13:49.646035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.789 [2024-06-10 12:13:49.646046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:123592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.789 [2024-06-10 12:13:49.646055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.789 [2024-06-10 12:13:49.646065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:123600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.789 [2024-06-10 12:13:49.646074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.789 [2024-06-10 12:13:49.646085] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:123608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.789 [2024-06-10 12:13:49.646094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.789 [2024-06-10 12:13:49.646104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:123616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.789 [2024-06-10 12:13:49.646113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.790 [2024-06-10 12:13:49.646123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:123624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.790 [2024-06-10 12:13:49.646132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.790 [2024-06-10 12:13:49.646142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:123632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.790 [2024-06-10 12:13:49.646152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.790 [2024-06-10 12:13:49.646162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:123640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.790 [2024-06-10 12:13:49.646171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.790 [2024-06-10 12:13:49.646181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:123648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.790 [2024-06-10 12:13:49.646190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.790 [2024-06-10 12:13:49.646200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:123656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.790 [2024-06-10 12:13:49.646210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.790 [2024-06-10 12:13:49.646220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:123664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.790 [2024-06-10 12:13:49.646229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.790 [2024-06-10 12:13:49.646239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:123672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.790 [2024-06-10 12:13:49.646248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.790 [2024-06-10 12:13:49.646259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:123680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.790 [2024-06-10 12:13:49.646268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.790 [2024-06-10 12:13:49.646279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:123704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.790 [2024-06-10 12:13:49.646289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.790 [2024-06-10 12:13:49.646299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:123712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:06.790 [2024-06-10 12:13:49.646309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.790 [2024-06-10 12:13:49.646319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:123720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.790 [2024-06-10 12:13:49.646329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.790 [2024-06-10 12:13:49.646339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:123728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.790 [2024-06-10 12:13:49.646348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.790 [2024-06-10 12:13:49.646358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:123736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.790 [2024-06-10 12:13:49.646368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.790 [2024-06-10 12:13:49.646378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:123744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.790 [2024-06-10 12:13:49.646387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.790 [2024-06-10 12:13:49.646398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:123752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.790 [2024-06-10 12:13:49.646407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.790 [2024-06-10 12:13:49.646417] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:123760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.790 [2024-06-10 12:13:49.646426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / ABORTED - SQ DELETION (00/08) pairs repeated for lba:123768 through lba:124080, then nvme_qpair_abort_queued_reqs / Command completed manually pairs for lba:124088 through lba:124208 ...]
00:25:06.792 [2024-06-10 12:13:49.660906] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c4c110 was disconnected and freed. reset controller.
00:25:06.792 [2024-06-10 12:13:49.660920] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:06.792 [2024-06-10 12:13:49.660948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.792 [2024-06-10 12:13:49.660961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.792 [2024-06-10 12:13:49.660976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.792 [2024-06-10 12:13:49.660989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.792 [2024-06-10 12:13:49.661001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.792 [2024-06-10 12:13:49.661013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.792 [2024-06-10 12:13:49.661026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.792 [2024-06-10 12:13:49.661038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.792 [2024-06-10 12:13:49.661050] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:06.792 [2024-06-10 12:13:49.661089] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c503a0 (9): Bad file descriptor
00:25:06.792 [2024-06-10 12:13:49.664680] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:06.792 [2024-06-10 12:13:49.699396] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:06.792
00:25:06.792 Latency(us)
00:25:06.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:06.792 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:06.792 Verification LBA range: start 0x0 length 0x4000
00:25:06.792 NVMe0n1 : 15.01 11286.45 44.09 988.03 0.00 10407.22 407.96 24012.39
00:25:06.792 ===================================================================================================================
00:25:06.792 Total : 11286.45 44.09 988.03 0.00 10407.22 407.96 24012.39
00:25:06.792 Received shutdown signal, test time was about 15.000000 seconds
00:25:06.792
00:25:06.792 Latency(us)
00:25:06.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:06.792 ===================================================================================================================
00:25:06.792 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:06.792 12:13:55 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:25:06.792 12:13:55 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:25:06.792 12:13:55 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:25:06.792 12:13:55 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2321110
00:25:06.792 12:13:55 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:25:06.792 12:13:55
nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2321110 /var/tmp/bdevperf.sock 00:25:06.792 12:13:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 2321110 ']' 00:25:06.792 12:13:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:06.792 12:13:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:06.792 12:13:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:06.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:06.792 12:13:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:06.792 12:13:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:07.357 12:13:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:07.357 12:13:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:25:07.357 12:13:56 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:07.614 [2024-06-10 12:13:56.978330] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:07.614 12:13:57 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:07.872 [2024-06-10 12:13:57.166790] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:07.872 12:13:57 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t 
tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:08.130 NVMe0n1 00:25:08.130 12:13:57 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:08.387 00:25:08.387 12:13:57 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:08.645 00:25:08.645 12:13:58 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:08.645 12:13:58 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:08.902 12:13:58 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:09.160 12:13:58 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:12.445 12:14:01 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:12.445 12:14:01 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:12.445 12:14:01 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2322003 00:25:12.445 12:14:01 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:12.445 12:14:01 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2322003 00:25:13.382 0 00:25:13.382 12:14:02 nvmf_tcp.nvmf_failover -- host/failover.sh@94 
-- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:13.382 [2024-06-10 12:13:56.004567] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:25:13.382 [2024-06-10 12:13:56.004622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2321110 ] 00:25:13.382 EAL: No free 2048 kB hugepages reported on node 1 00:25:13.382 [2024-06-10 12:13:56.073687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.382 [2024-06-10 12:13:56.138926] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.382 [2024-06-10 12:13:58.461070] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:13.382 [2024-06-10 12:13:58.461113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:13.382 [2024-06-10 12:13:58.461127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.382 [2024-06-10 12:13:58.461138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:13.382 [2024-06-10 12:13:58.461147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.382 [2024-06-10 12:13:58.461157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:13.382 [2024-06-10 12:13:58.461166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.382 [2024-06-10 12:13:58.461175] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:13.382 [2024-06-10 12:13:58.461184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:13.382 [2024-06-10 12:13:58.461193] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:13.382 [2024-06-10 12:13:58.461217] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:13.382 [2024-06-10 12:13:58.461233] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ec3a0 (9): Bad file descriptor
00:25:13.382 [2024-06-10 12:13:58.509569] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:13.382 Running I/O for 1 seconds...
00:25:13.382
00:25:13.382 Latency(us)
00:25:13.382 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:13.382 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:13.382 Verification LBA range: start 0x0 length 0x4000
00:25:13.382 NVMe0n1 : 1.01 11586.60 45.26 0.00 0.00 11004.19 2411.72 11219.76
00:25:13.382 ===================================================================================================================
00:25:13.382 Total : 11586.60 45.26 0.00 0.00 11004.19 2411.72 11219.76
00:25:13.382 12:14:02 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
12:14:02 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
12:14:02 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
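The bdevperf summary above packs runtime, IOPS, throughput, and latency into one fixed-column row ("NVMe0n1 : 1.01 11586.60 45.26 0.00 0.00 11004.19 2411.72 11219.76", with the meaning of each column given by the "Device Information" header line). A minimal stand-alone sketch of pulling those fields out of a captured log line follows; the parsing helper itself is hypothetical and not part of the SPDK scripts:

```python
# Parse a bdevperf per-device summary row of the form seen in the log above:
#   <name> : <runtime(s)> <IOPS> <MiB/s> <Fail/s> <TO/s> <Average> <min> <max>
def parse_bdevperf_row(line):
    # Split the device name off at the first " : ", then read the
    # eight numeric columns in header order.
    name, _, rest = line.partition(" : ")
    runtime, iops, mibps, fails, tos, avg, lat_min, lat_max = map(float, rest.split())
    return {
        "device": name.strip(),
        "runtime_s": runtime,
        "iops": iops,
        "mib_per_s": mibps,
        "fail_per_s": fails,
        "timeout_per_s": tos,
        "avg_us": avg,
        "min_us": lat_min,
        "max_us": lat_max,
    }

# The row from the 1-second verify run in the log above:
row = parse_bdevperf_row("NVMe0n1 : 1.01 11586.60 45.26 0.00 0.00 11004.19 2411.72 11219.76")
print(row["device"], row["iops"], row["avg_us"])
```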
00:25:13.898 12:14:03 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:13.898 12:14:03 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:13.898 12:14:03 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:14.156 12:14:03 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:17.514 12:14:06 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:17.514 12:14:06 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:17.514 12:14:06 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 2321110 00:25:17.514 12:14:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 2321110 ']' 00:25:17.514 12:14:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 2321110 00:25:17.514 12:14:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:25:17.514 12:14:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:17.514 12:14:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2321110 00:25:17.514 12:14:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:17.514 12:14:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:17.514 12:14:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2321110' 00:25:17.514 killing process with pid 2321110 00:25:17.514 12:14:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 2321110 00:25:17.514 
12:14:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 2321110 00:25:17.514 12:14:06 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:17.514 12:14:06 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:17.773 12:14:07 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:17.773 12:14:07 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:17.773 12:14:07 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:17.773 12:14:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:17.773 12:14:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:17.773 12:14:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:17.773 12:14:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:17.773 12:14:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:17.773 12:14:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:17.773 rmmod nvme_tcp 00:25:17.773 rmmod nvme_fabrics 00:25:17.773 rmmod nvme_keyring 00:25:17.773 12:14:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:17.773 12:14:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:17.773 12:14:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:17.773 12:14:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2317883 ']' 00:25:17.773 12:14:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2317883 00:25:17.773 12:14:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 2317883 ']' 00:25:17.773 12:14:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 2317883 00:25:17.773 12:14:07 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@954 -- # uname 00:25:17.773 12:14:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:17.773 12:14:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2317883 00:25:18.033 12:14:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:25:18.033 12:14:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:25:18.033 12:14:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2317883' 00:25:18.033 killing process with pid 2317883 00:25:18.033 12:14:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 2317883 00:25:18.033 12:14:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 2317883 00:25:18.033 12:14:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:18.033 12:14:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:18.033 12:14:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:18.033 12:14:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:18.033 12:14:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:18.033 12:14:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.033 12:14:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:18.033 12:14:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.571 12:14:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:20.571 00:25:20.571 real 0m39.528s 00:25:20.571 user 2m2.281s 00:25:20.571 sys 0m9.784s 00:25:20.571 12:14:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:20.571 12:14:09 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@10 -- # set +x 00:25:20.571 ************************************ 00:25:20.571 END TEST nvmf_failover 00:25:20.571 ************************************ 00:25:20.571 12:14:09 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:20.571 12:14:09 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:20.571 12:14:09 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:20.571 12:14:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:20.571 ************************************ 00:25:20.571 START TEST nvmf_host_discovery 00:25:20.571 ************************************ 00:25:20.571 12:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:20.571 * Looking for test storage... 00:25:20.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:20.571 12:14:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:20.571 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:20.571 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:20.571 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:20.571 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:20.571 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:20.571 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:20.571 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:20.571 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # 
NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:20.571 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:20.571 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:20.572 12:14:09 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:25:20.572 12:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # 
pci_drivers=() 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:27.136 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:27.136 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:27.136 Found net devices under 0000:af:00.0: cvl_0_0 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:27.136 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:27.137 Found net devices under 0000:af:00.1: cvl_0_1 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:27.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:27.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:25:27.137 00:25:27.137 --- 10.0.0.2 ping statistics --- 00:25:27.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.137 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:27.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:27.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:25:27.137 00:25:27.137 --- 10.0.0.1 ping statistics --- 00:25:27.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.137 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:27.137 12:14:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:27.137 12:14:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:27.137 12:14:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:27.137 12:14:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:27.137 12:14:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.137 12:14:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2326440 00:25:27.137 12:14:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2326440 00:25:27.137 12:14:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 2326440 ']' 00:25:27.137 12:14:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:25:27.137 12:14:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:27.137 12:14:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:27.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:27.137 12:14:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:27.137 12:14:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.137 12:14:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:27.137 [2024-06-10 12:14:16.078880] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:25:27.137 [2024-06-10 12:14:16.078931] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:27.137 EAL: No free 2048 kB hugepages reported on node 1 00:25:27.137 [2024-06-10 12:14:16.153790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.137 [2024-06-10 12:14:16.228940] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:27.137 [2024-06-10 12:14:16.228974] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:27.137 [2024-06-10 12:14:16.228984] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:27.137 [2024-06-10 12:14:16.228992] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:25:27.137 [2024-06-10 12:14:16.228999] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:27.137 [2024-06-10 12:14:16.229023] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:25:27.395 12:14:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:27.395 12:14:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:25:27.395 12:14:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:27.395 12:14:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:27.395 12:14:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.395 12:14:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:27.395 12:14:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:27.395 12:14:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:27.395 12:14:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.654 [2024-06-10 12:14:16.916556] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:27.654 12:14:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:27.654 12:14:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:27.654 12:14:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:27.654 12:14:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.654 [2024-06-10 12:14:16.928702] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:27.654 12:14:16 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:27.654 12:14:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512
00:25:27.654 12:14:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:27.654 12:14:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:27.654 null0
00:25:27.655 12:14:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:27.655 12:14:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512
00:25:27.655 12:14:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:27.655 12:14:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:27.655 null1
00:25:27.655 12:14:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:27.655 12:14:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine
00:25:27.655 12:14:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:27.655 12:14:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:27.655 12:14:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:27.655 12:14:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2326704
00:25:27.655 12:14:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock
00:25:27.655 12:14:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2326704 /tmp/host.sock
00:25:27.655 12:14:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 2326704 ']'
00:25:27.655 12:14:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock
00:25:27.655 12:14:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100
00:25:27.655 12:14:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:25:27.655 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:25:27.655 12:14:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable
00:25:27.655 12:14:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:27.655 [2024-06-10 12:14:17.006529] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization...
00:25:27.655 [2024-06-10 12:14:17.006575] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2326704 ]
00:25:27.655 EAL: No free 2048 kB hugepages reported on node 1
00:25:27.655 [2024-06-10 12:14:17.075522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:27.655 [2024-06-10 12:14:17.149821] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]]
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]]
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:28.590 12:14:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:28.590 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:25:28.590 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:25:28.590 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:28.590 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:28.590 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:28.590 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:25:28.590 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:28.590 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:28.590 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:28.590 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:28.590 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:28.590 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:28.590 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:28.590 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:25:28.590 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:25:28.590 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:28.590 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:28.590 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:28.590 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:28.590 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:28.590 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:28.590 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:28.848 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:25:28.848 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:25:28.848 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:28.848 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:28.848 [2024-06-10 12:14:18.139887] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:28.848 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:28.848 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:25:28.848 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:28.848 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:28.848 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:28.848 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:28.848 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:28.848 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:28.848 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:28.848 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:25:28.848 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:25:28.848 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count ))
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == \n\v\m\e\0 ]]
00:25:28.849 12:14:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1
00:25:29.418 [2024-06-10 12:14:18.825687] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:25:29.418 [2024-06-10 12:14:18.825706] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:25:29.418 [2024-06-10 12:14:18.825719] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:25:29.418 [2024-06-10 12:14:18.912973] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:25:29.676 [2024-06-10 12:14:18.969029] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:25:29.676 [2024-06-10 12:14:18.969049] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:25:29.934 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:25:29.934 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:25:29.934 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names
00:25:29.934 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:29.934 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:29.934 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:29.934 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:29.934 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:29.934 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:29.934 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:29.934 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:29.934 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0
00:25:29.934 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:25:29.934 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:25:29.934 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10
00:25:29.935 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:25:29.935 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:25:29.935 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list
00:25:29.935 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:29.935 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:29.935 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:29.935 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:29.935 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:29.935 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:29.935 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0 ]]
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count ))
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:30.193 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10
00:25:30.194 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:25:30.194 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:25:30.194 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list
00:25:30.194 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:30.194 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:30.194 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:30.194 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:30.194 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:30.194 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:30.452 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:30.452 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:25:30.452 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0
00:25:30.452 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:25:30.452 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:25:30.452 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:30.452 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:30.452 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10
00:25:30.452 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:25:30.452 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:30.452 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count
00:25:30.452 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:25:30.452 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:25:30.452 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:30.452 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:30.452 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:30.452 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:25:30.452 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:25:30.452 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count ))
00:25:30.452 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0
00:25:30.452 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:25:30.452 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:30.452 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:30.452 [2024-06-10 12:14:19.868556] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:25:30.452 [2024-06-10 12:14:19.869422] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:25:30.452 [2024-06-10 12:14:19.869443] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:25:30.452 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:30.452 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:30.453 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:30.453 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10
00:25:30.453 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:25:30.453 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:25:30.453 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names
00:25:30.453 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:30.453 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:30.453 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:30.453 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:30.453 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:30.453 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:30.453 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:30.453 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:30.453 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0
00:25:30.453 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:30.453 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:30.453 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10
00:25:30.453 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:25:30.453 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:25:30.453 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list
00:25:30.453 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:30.453 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:30.453 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:30.453 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:30.453 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:30.453 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:30.453 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:30.711 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:25:30.711 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0
00:25:30.711 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:25:30.711 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:25:30.711 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10
00:25:30.711 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:25:30.711 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:25:30.711 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0
00:25:30.711 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:25:30.711 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:25:30.711 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:25:30.711 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:30.711 12:14:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:25:30.711 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:30.711 [2024-06-10 12:14:19.997100] bdev_nvme.c:6902:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:25:30.711 12:14:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:30.711 12:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:25:30.711 12:14:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1
00:25:30.711 [2024-06-10 12:14:20.101706] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:25:30.711 [2024-06-10 12:14:20.101728] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:25:30.711 [2024-06-10 12:14:20.101735] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count ))
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:31.647 [2024-06-10 12:14:21.136780] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:25:31.647 [2024-06-10 12:14:21.136802] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:25:31.647 [2024-06-10 12:14:21.140504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:31.647 [2024-06-10 12:14:21.140524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:31.647 [2024-06-10 12:14:21.140535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:31.647 [2024-06-10 12:14:21.140545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:31.647 [2024-06-10 12:14:21.140555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:31.647 [2024-06-10 12:14:21.140564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:31.647 [2024-06-10 12:14:21.140573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:31.647 [2024-06-10 12:14:21.140583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:31.647 [2024-06-10 12:14:21.140592] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d61d70 is same with the state(5) to be set
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- ))
00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916
-- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:25:31.647 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:31.648 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:31.648 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:31.648 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:31.648 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.648 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:31.648 [2024-06-10 12:14:21.150517] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d61d70 (9): Bad file descriptor 00:25:31.648 [2024-06-10 12:14:21.160560] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:31.648 [2024-06-10 12:14:21.160798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.648 [2024-06-10 12:14:21.160815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d61d70 with addr=10.0.0.2, port=4420 00:25:31.648 [2024-06-10 12:14:21.160825] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d61d70 is same with the state(5) to be set 00:25:31.648 [2024-06-10 12:14:21.160839] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d61d70 (9): Bad file descriptor 00:25:31.648 [2024-06-10 12:14:21.160860] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:31.648 [2024-06-10 12:14:21.160869] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization 
failed 00:25:31.648 [2024-06-10 12:14:21.160880] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:31.648 [2024-06-10 12:14:21.160892] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:31.906 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:31.906 [2024-06-10 12:14:21.170615] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:31.906 [2024-06-10 12:14:21.170868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.906 [2024-06-10 12:14:21.170882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d61d70 with addr=10.0.0.2, port=4420 00:25:31.906 [2024-06-10 12:14:21.170891] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d61d70 is same with the state(5) to be set 00:25:31.906 [2024-06-10 12:14:21.170904] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d61d70 (9): Bad file descriptor 00:25:31.906 [2024-06-10 12:14:21.170923] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:31.906 [2024-06-10 12:14:21.170931] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:31.906 [2024-06-10 12:14:21.170940] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:31.906 [2024-06-10 12:14:21.170951] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:31.906 [2024-06-10 12:14:21.180667] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:31.906 [2024-06-10 12:14:21.180854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.906 [2024-06-10 12:14:21.180868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d61d70 with addr=10.0.0.2, port=4420 00:25:31.906 [2024-06-10 12:14:21.180877] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d61d70 is same with the state(5) to be set 00:25:31.906 [2024-06-10 12:14:21.180893] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d61d70 (9): Bad file descriptor 00:25:31.906 [2024-06-10 12:14:21.180905] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:31.906 [2024-06-10 12:14:21.180913] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:31.906 [2024-06-10 12:14:21.180922] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:31.906 [2024-06-10 12:14:21.180933] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:31.906 [2024-06-10 12:14:21.190720] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:31.906 [2024-06-10 12:14:21.190874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.906 [2024-06-10 12:14:21.190887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d61d70 with addr=10.0.0.2, port=4420 00:25:31.906 [2024-06-10 12:14:21.190897] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d61d70 is same with the state(5) to be set 00:25:31.906 [2024-06-10 12:14:21.190909] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d61d70 (9): Bad file descriptor 00:25:31.906 [2024-06-10 12:14:21.190921] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:31.906 [2024-06-10 12:14:21.190929] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:31.906 [2024-06-10 12:14:21.190939] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:31.906 [2024-06-10 12:14:21.190950] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:31.906 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.906 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:31.906 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:31.906 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:31.906 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:31.906 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:25:31.907 [2024-06-10 12:14:21.200773] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:31.907 [2024-06-10 12:14:21.200903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.907 [2024-06-10 12:14:21.200916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d61d70 with addr=10.0.0.2, port=4420 00:25:31.907 [2024-06-10 12:14:21.200925] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d61d70 is same with the state(5) to be set 00:25:31.907 [2024-06-10 12:14:21.200937] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d61d70 (9): Bad file descriptor 00:25:31.907 [2024-06-10 12:14:21.200948] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:31.907 
[2024-06-10 12:14:21.200957] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:31.907 [2024-06-10 12:14:21.200965] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:31.907 [2024-06-10 12:14:21.200976] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:31.907 [2024-06-10 12:14:21.210826] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:31.907 [2024-06-10 12:14:21.211085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.907 [2024-06-10 12:14:21.211099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d61d70 with addr=10.0.0.2, port=4420 00:25:31.907 [2024-06-10 12:14:21.211109] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d61d70 is same with the state(5) to be set 00:25:31.907 [2024-06-10 12:14:21.211122] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d61d70 (9): Bad file descriptor 00:25:31.907 [2024-06-10 12:14:21.211142] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:31.907 [2024-06-10 12:14:21.211150] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:31.907 [2024-06-10 
12:14:21.211159] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:31.907 [2024-06-10 12:14:21.211171] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:31.907 [2024-06-10 12:14:21.220882] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:31.907 [2024-06-10 12:14:21.221111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.907 [2024-06-10 12:14:21.221124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d61d70 with addr=10.0.0.2, port=4420 00:25:31.907 [2024-06-10 12:14:21.221133] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d61d70 is same with the state(5) to be set 00:25:31.907 [2024-06-10 12:14:21.221146] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d61d70 (9): Bad file descriptor 00:25:31.907 [2024-06-10 12:14:21.221166] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:31.907 [2024-06-10 12:14:21.221174] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:31.907 [2024-06-10 12:14:21.221183] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:31.907 [2024-06-10 12:14:21.221200] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:31.907 [2024-06-10 12:14:21.223191] bdev_nvme.c:6765:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:31.907 [2024-06-10 12:14:21.223209] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.907 12:14:21 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4421 == \4\4\2\1 ]] 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]] 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:31.907 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:32.166 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]] 00:25:32.166 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:32.166 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:32.166 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:32.166 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:32.166 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:32.166 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:32.166 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:32.166 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:32.166 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:25:32.166 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:32.166 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:32.166 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:32.166 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.166 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:32.166 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:32.166 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:32.166 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:25:32.166 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:32.166 12:14:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:32.166 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:32.166 12:14:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.115 [2024-06-10 12:14:22.545629] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:33.115 [2024-06-10 12:14:22.545646] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:33.115 [2024-06-10 12:14:22.545660] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:33.116 [2024-06-10 12:14:22.631922] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:33.373 [2024-06-10 12:14:22.813650] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:33.373 [2024-06-10 12:14:22.813676] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:33.373 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:33.373 12:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:33.373 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:25:33.373 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:33.373 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:25:33.373 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:33.373 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:25:33.373 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:33.373 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:33.373 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:33.373 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.373 request: 00:25:33.373 { 00:25:33.373 "name": "nvme", 00:25:33.373 "trtype": "tcp", 00:25:33.373 "traddr": "10.0.0.2", 00:25:33.373 "adrfam": "ipv4", 00:25:33.373 "trsvcid": "8009", 00:25:33.373 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:33.373 "wait_for_attach": true, 00:25:33.373 "method": "bdev_nvme_start_discovery", 00:25:33.373 "req_id": 1 00:25:33.373 } 00:25:33.373 Got JSON-RPC error 
response 00:25:33.373 response: 00:25:33.373 { 00:25:33.373 "code": -17, 00:25:33.373 "message": "File exists" 00:25:33.373 } 00:25:33.373 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:25:33.373 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:25:33.373 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:33.373 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:33.373 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:33.373 12:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:33.373 12:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:33.373 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:33.373 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.373 12:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:33.373 12:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:33.373 12:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:33.374 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:33.374 12:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:33.374 12:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:33.374 12:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:33.374 12:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:33.374 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:33.374 12:14:22 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:33.374 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.374 12:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:33.632 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:33.632 12:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:33.632 12:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:33.632 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:25:33.632 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:33.632 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:25:33.632 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:33.632 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:25:33.632 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:33.632 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:33.632 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:33.632 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.632 request: 00:25:33.632 { 00:25:33.632 "name": "nvme_second", 00:25:33.632 
"trtype": "tcp", 00:25:33.632 "traddr": "10.0.0.2", 00:25:33.632 "adrfam": "ipv4", 00:25:33.632 "trsvcid": "8009", 00:25:33.632 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:33.632 "wait_for_attach": true, 00:25:33.632 "method": "bdev_nvme_start_discovery", 00:25:33.632 "req_id": 1 00:25:33.632 } 00:25:33.632 Got JSON-RPC error response 00:25:33.632 response: 00:25:33.632 { 00:25:33.632 "code": -17, 00:25:33.632 "message": "File exists" 00:25:33.632 } 00:25:33.632 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:25:33.632 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:25:33.632 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:33.632 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:33.632 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:33.632 12:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:33.632 12:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:33.632 12:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:33.632 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:33.632 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.632 12:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:33.632 12:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:33.632 12:14:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:33.632 12:14:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:33.632 12:14:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:33.632 12:14:23 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:33.632 12:14:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:33.632 12:14:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:33.632 12:14:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:33.632 12:14:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.632 12:14:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:33.632 12:14:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:33.632 12:14:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:33.632 12:14:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:33.632 12:14:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:25:33.632 12:14:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:33.632 12:14:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:25:33.632 12:14:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:33.632 12:14:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:25:33.632 12:14:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:33.632 12:14:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 
-f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:33.632 12:14:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:33.632 12:14:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.566 [2024-06-10 12:14:24.065145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.566 [2024-06-10 12:14:24.065172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c480 with addr=10.0.0.2, port=8010 00:25:34.566 [2024-06-10 12:14:24.065186] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:34.566 [2024-06-10 12:14:24.065194] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:34.566 [2024-06-10 12:14:24.065218] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:35.938 [2024-06-10 12:14:25.067600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.938 [2024-06-10 12:14:25.067624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c480 with addr=10.0.0.2, port=8010 00:25:35.938 [2024-06-10 12:14:25.067636] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:35.938 [2024-06-10 12:14:25.067644] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:35.938 [2024-06-10 12:14:25.067651] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:36.871 [2024-06-10 12:14:26.069728] bdev_nvme.c:7021:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:36.871 request: 00:25:36.871 { 00:25:36.871 "name": "nvme_second", 00:25:36.871 "trtype": "tcp", 00:25:36.871 "traddr": "10.0.0.2", 00:25:36.871 "adrfam": "ipv4", 00:25:36.871 "trsvcid": "8010", 00:25:36.871 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:36.871 "wait_for_attach": false, 
00:25:36.871 "attach_timeout_ms": 3000, 00:25:36.871 "method": "bdev_nvme_start_discovery", 00:25:36.871 "req_id": 1 00:25:36.871 } 00:25:36.871 Got JSON-RPC error response 00:25:36.871 response: 00:25:36.871 { 00:25:36.871 "code": -110, 00:25:36.871 "message": "Connection timed out" 00:25:36.871 } 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2326704 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # 
nvmftestfini 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:36.871 rmmod nvme_tcp 00:25:36.871 rmmod nvme_fabrics 00:25:36.871 rmmod nvme_keyring 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2326440 ']' 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2326440 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@949 -- # '[' -z 2326440 ']' 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # kill -0 2326440 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # uname 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2326440 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # echo 'killing 
process with pid 2326440' 00:25:36.871 killing process with pid 2326440 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@968 -- # kill 2326440 00:25:36.871 12:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@973 -- # wait 2326440 00:25:37.142 12:14:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:37.142 12:14:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:37.142 12:14:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:37.142 12:14:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:37.142 12:14:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:37.142 12:14:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.143 12:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:37.143 12:14:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.049 12:14:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:39.049 00:25:39.049 real 0m18.835s 00:25:39.049 user 0m22.335s 00:25:39.049 sys 0m6.926s 00:25:39.049 12:14:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:39.049 12:14:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.049 ************************************ 00:25:39.049 END TEST nvmf_host_discovery 00:25:39.049 ************************************ 00:25:39.049 12:14:28 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:39.049 12:14:28 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:39.049 12:14:28 nvmf_tcp -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:25:39.049 12:14:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:39.308 ************************************ 00:25:39.308 START TEST nvmf_host_multipath_status 00:25:39.308 ************************************ 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:39.308 * Looking for test storage... 00:25:39.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:39.308 12:14:28 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:25:39.308 12:14:28 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:39.308 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:39.309 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:39.309 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:39.309 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:39.309 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:39.309 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:39.309 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:39.309 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:39.309 
12:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:39.309 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:39.309 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:39.309 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.309 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:39.309 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.309 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:39.309 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:39.309 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:25:39.309 12:14:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:47.460 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:47.461 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:47.461 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:47.461 Found net devices under 0000:af:00.0: cvl_0_0 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:47.461 Found net devices under 0000:af:00.1: cvl_0_1 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:47.461 12:14:35 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:25:47.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:47.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms
00:25:47.461
00:25:47.461 --- 10.0.0.2 ping statistics ---
00:25:47.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:47.461 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms
00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:47.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:47.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms
00:25:47.461
00:25:47.461 --- 10.0.0.1 ping statistics ---
00:25:47.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:47.461 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms
00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0
00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@723 -- # xtrace_disable
00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:47.461 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2332140
00:25:47.462 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:25:47.462 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2332140
00:25:47.462 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 2332140 ']'
00:25:47.462 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:47.462 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100
00:25:47.462 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:47.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:47.462 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable
00:25:47.462 12:14:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:47.462 [2024-06-10 12:14:35.835864] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization...
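The nvmf/common.sh trace above builds a two-port loopback topology on one host: the target NIC (cvl_0_0) is moved into a network namespace and addressed 10.0.0.2, its peer (cvl_0_1) stays in the root namespace as 10.0.0.1, and bidirectional pings confirm the link before the target starts inside the namespace. A condensed sketch of those steps, with interface names, addresses, and the iptables rule taken from the log; the DRY_RUN wrapper is an addition of this sketch, defaulting to printing the commands (set DRY_RUN= and run as root to actually execute them):

```shell
#!/usr/bin/env bash
# Condensed netns loopback setup, mirroring the nvmf/common.sh trace above.
# DRY_RUN defaults to "echo" so this only prints the commands; clear it and
# run as root on a machine with cvl_0_0/cvl_0_1 to configure for real.
DRY_RUN="${DRY_RUN:-echo}"
NS=cvl_0_0_ns_spdk

$DRY_RUN ip netns add "$NS"                              # target-side namespace
$DRY_RUN ip link set cvl_0_0 netns "$NS"                 # move target NIC into it
$DRY_RUN ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side (root ns)
$DRY_RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
$DRY_RUN ip link set cvl_0_1 up
$DRY_RUN ip netns exec "$NS" ip link set cvl_0_0 up
$DRY_RUN ip netns exec "$NS" ip link set lo up
$DRY_RUN iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
$DRY_RUN ping -c 1 10.0.0.2                              # initiator -> target
$DRY_RUN ip netns exec "$NS" ping -c 1 10.0.0.1          # target -> initiator
```

With the NIC captive in the namespace, `ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...` (as at nvmf/common.sh@480 below) is the only process that can listen on 10.0.0.2, so the initiator-side connection genuinely crosses the wire-level path.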
00:25:47.462 [2024-06-10 12:14:35.835913] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:47.462 EAL: No free 2048 kB hugepages reported on node 1
00:25:47.462 [2024-06-10 12:14:35.909581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:25:47.462 [2024-06-10 12:14:35.980651] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:47.462 [2024-06-10 12:14:35.980694] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:47.462 [2024-06-10 12:14:35.980703] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:47.462 [2024-06-10 12:14:35.980711] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:47.462 [2024-06-10 12:14:35.980718] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
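Once nvmf_tgt is launched in the background, waitforlisten (called at nvmf/common.sh@482 above) blocks until the target's UNIX-domain RPC socket appears, up to max_retries attempts. A minimal stand-in for that polling pattern; the function name is hypothetical, and the real helper in autotest_common.sh additionally checks that the PID is still alive and probes the RPC itself rather than just the socket file:

```shell
# Minimal polling loop in the spirit of waitforlisten: wait for a UNIX-domain
# socket to show up on disk, giving up after max_retries attempts.
wait_for_rpc_sock() {
    local sock="$1" max_retries="${2:-100}" i=0
    while (( i < max_retries )); do
        if [[ -S "$sock" ]]; then
            return 0               # -S: path exists and is a socket => listening
        fi
        sleep 0.1
        i=$(( i + 1 ))
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
```

In the run above the equivalent call would be against /var/tmp/spdk.sock for the target and, later, /var/tmp/bdevperf.sock for the bdevperf host process.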
00:25:47.462 [2024-06-10 12:14:35.980810] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:25:47.462 [2024-06-10 12:14:35.980813] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:25:47.462 12:14:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:25:47.462 12:14:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0
00:25:47.462 12:14:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:25:47.462 12:14:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@729 -- # xtrace_disable
00:25:47.462 12:14:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:47.462 12:14:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:47.462 12:14:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2332140
00:25:47.462 12:14:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:25:47.462 [2024-06-10 12:14:36.816751] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:47.462 12:14:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:25:47.756 Malloc0
00:25:47.756 12:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:25:47.756 12:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:48.014 12:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:48.272 [2024-06-10 12:14:37.551704] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:48.272 12:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:48.272 [2024-06-10 12:14:37.720120] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:25:48.272 12:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:25:48.272 12:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2332465
00:25:48.272 12:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:25:48.272 12:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2332465 /var/tmp/bdevperf.sock
00:25:48.272 12:14:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 2332465 ']'
00:25:48.272 12:14:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:48.272 12:14:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100
00:25:48.272 12:14:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:48.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:48.272 12:14:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable
00:25:48.272 12:14:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:49.205 12:14:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:25:49.205 12:14:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0
00:25:49.205 12:14:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:25:49.463 12:14:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
00:25:49.720 Nvme0n1
00:25:49.720 12:14:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:25:50.285 Nvme0n1
00:25:50.285 12:14:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:25:50.285 12:14:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:25:52.184 12:14:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:25:52.184 12:14:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:25:52.442 12:14:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:25:52.700 12:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:25:53.635 12:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:25:53.635 12:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:25:53.635 12:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:53.635 12:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:53.894 12:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:53.894 12:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:25:53.894 12:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:53.894 12:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:54.153 12:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:54.153 12:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:54.153 12:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:54.153 12:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:54.153 12:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:54.153 12:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:54.153 12:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:54.153 12:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:54.411 12:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:54.411 12:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:54.411 12:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:54.411 12:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:54.669 12:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:54.669 12:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:25:54.669 12:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:54.669 12:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:54.669 12:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:54.669 12:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:25:54.669 12:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:25:54.928 12:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:25:55.186 12:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
00:25:56.120 12:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:25:56.120 12:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:25:56.120 12:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:56.120 12:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:56.378 12:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:56.378 12:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:25:56.378 12:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:56.378 12:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:56.637 12:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:56.637 12:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:56.637 12:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:56.637 12:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:56.637 12:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:56.637 12:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:56.637 12:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:56.637 12:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:56.895 12:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:56.895 12:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:56.895 12:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:56.895 12:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:57.153 12:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:57.153 12:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:25:57.153 12:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:57.153 12:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:57.412 12:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:57.412 12:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
00:25:57.412 12:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:25:57.412 12:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:25:57.670 12:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
00:25:58.604 12:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
00:25:58.604 12:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:25:58.604 12:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:58.604 12:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:58.862 12:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:58.862 12:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:25:58.862 12:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:58.862 12:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:59.121 12:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:59.121 12:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:59.121 12:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:59.121 12:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:59.121 12:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:59.121 12:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:59.121 12:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:59.121 12:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:59.378 12:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:59.378 12:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:59.378 12:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:59.378 12:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:59.637 12:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:59.637 12:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:25:59.637 12:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:59.637 12:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:59.637 12:14:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:59.637 12:14:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
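Each port_status call above runs bdev_nvme_get_io_paths and pipes the JSON through the same jq filter, selecting one path by trsvcid and extracting one boolean field (current, connected, or accessible). The filter's behavior can be exercised offline; the sample JSON below is a hand-made stand-in that mimics only the fields the filter reads, not a captured RPC response, and this port_status is a simplified re-implementation of the helper in multipath_status.sh:

```shell
# Hand-made stand-in for bdev_nvme_get_io_paths output (not a real capture);
# only the fields the jq filter touches are included.
paths_json='{"poll_groups":[{"io_paths":[
  {"transport":{"trsvcid":"4420"},"current":true,"connected":true,"accessible":true},
  {"transport":{"trsvcid":"4421"},"current":false,"connected":true,"accessible":true}]}]}'

# port_status <trsvcid> <field> <expected> -- same shape as the helper in the log,
# but reading the canned JSON above instead of calling rpc.py.
port_status() {
    local got
    got=$(jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2" <<< "$paths_json")
    [[ "$got" == "$3" ]]
}

port_status 4420 current true && echo "4420 is the active path"
```

check_status in the test is then just six such assertions in a row (current/connected/accessible for each of ports 4420 and 4421), which is why every check_status line above fans out into six rpc.py-plus-jq pairs.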
00:25:59.637 12:14:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:25:59.895 12:14:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:26:00.153 12:14:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:26:01.087 12:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:26:01.087 12:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:26:01.087 12:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:01.087 12:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:01.344 12:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:01.344 12:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:26:01.345 12:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:01.345 12:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:01.345 12:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:01.345 12:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:01.603 12:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:01.603 12:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:01.603 12:14:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:01.603 12:14:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:01.603 12:14:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:01.603 12:14:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:01.862 12:14:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:01.862 12:14:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:26:01.862 12:14:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:01.862 12:14:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:02.120 12:14:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:02.120 12:14:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:26:02.120 12:14:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:02.120 12:14:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:02.120 12:14:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:02.120 12:14:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:26:02.120 12:14:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:26:02.378 12:14:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:26:02.635 12:14:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:26:03.568 12:14:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:26:03.568 12:14:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:26:03.568 12:14:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:03.568 12:14:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:03.827 12:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:03.827 12:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:26:03.827 12:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:03.827 12:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:03.827 12:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:03.827 12:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:03.827 12:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:03.827 12:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:04.084 12:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:04.084 12:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:04.084 12:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:04.084 12:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:04.342 12:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:04.342 12:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:26:04.342 12:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:04.342 12:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:04.342 12:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:04.342 12:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:26:04.342 12:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:04.342 12:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:04.600 12:14:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:04.600 12:14:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:26:04.600 12:14:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:26:04.858 12:14:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:26:05.116 12:14:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:26:06.049 12:14:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:26:06.049 12:14:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:26:06.049 12:14:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:06.049 12:14:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:06.308 12:14:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:06.308 12:14:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:26:06.308 12:14:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:06.308 12:14:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:06.308 12:14:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:06.308 12:14:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:06.308 12:14:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:06.308 12:14:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:06.568 12:14:55 nvmf_tcp.nvmf_host_multipath_status
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.568 12:14:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:06.568 12:14:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.568 12:14:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:06.828 12:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.828 12:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:06.828 12:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.828 12:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:06.828 12:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:06.828 12:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:06.828 12:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.828 12:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:07.087 12:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.087 12:14:56 
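The `port_status` calls in the trace above all follow one pattern: fetch `bdev_nvme_get_io_paths` over the bdevperf RPC socket, then filter with `jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="PORT").FIELD'`. A minimal Python sketch of that same selection, run against a hypothetical sample of the RPC output (the JSON shape is inferred from the jq filter in the log, not taken from SPDK documentation):

```python
import json

# Hypothetical sample shaped the way the jq filter expects:
# .poll_groups[].io_paths[] entries carrying .transport.trsvcid
# plus the current/connected/accessible booleans.
sample = json.loads("""
{
  "poll_groups": [
    {"io_paths": [
      {"transport": {"trsvcid": "4420"},
       "current": false, "connected": true, "accessible": true},
      {"transport": {"trsvcid": "4421"},
       "current": true, "connected": true, "accessible": true}
    ]}
  ]
}
""")

def port_status(data, port, field):
    """Mimic: jq -r '.poll_groups[].io_paths[]
                     | select (.transport.trsvcid=="PORT").FIELD'"""
    for group in data["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == port:
                return path[field]
    return None  # no path for that trsvcid

print(port_status(sample, "4421", "current"))  # True for this sample
```

The shell helper then string-compares the jq output against the expected `true`/`false`, which is what the repeated `[[ false == \f\a\l\s\e ]]` / `[[ true == \t\r\u\e ]]` lines in the log are.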
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:07.347 12:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:07.347 12:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:07.347 12:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:07.606 12:14:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:08.544 12:14:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:08.544 12:14:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:08.544 12:14:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.544 12:14:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:08.894 12:14:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.894 12:14:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:08.894 12:14:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.894 12:14:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:09.153 12:14:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.153 12:14:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:09.153 12:14:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.153 12:14:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:09.153 12:14:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.153 12:14:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:09.153 12:14:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.153 12:14:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:09.412 12:14:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.412 12:14:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:09.412 12:14:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:26:09.412 12:14:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:09.671 12:14:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.671 12:14:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:09.671 12:14:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.672 12:14:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:09.672 12:14:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.672 12:14:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:09.672 12:14:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:09.931 12:14:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:10.191 12:14:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:11.129 12:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:11.129 12:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:11.129 12:15:00 
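As the trace shows (e.g. `check_status false true true true true true`), `check_status` takes six expected booleans and asserts them pairwise per port: current for 4420/4421, then connected, then accessible. A sketch of that comparison, with the argument order inferred from the `port_status` calls at multipath_status.sh@68-73 (the status dict here is a stand-in for live RPC results):

```python
def check_status(status, *expected):
    """status: {(port, field): bool} as reported per io_path.
    expected: six booleans in the order the trace uses:
    current 4420/4421, connected 4420/4421, accessible 4420/4421."""
    keys = [("4420", "current"), ("4421", "current"),
            ("4420", "connected"), ("4421", "connected"),
            ("4420", "accessible"), ("4421", "accessible")]
    return all(status[k] == want for k, want in zip(keys, expected))

# Stand-in snapshot: 4420 inaccessible, 4421 optimized and current.
status = {("4420", "current"): False, ("4421", "current"): True,
          ("4420", "connected"): True, ("4421", "connected"): True,
          ("4420", "accessible"): False, ("4421", "accessible"): True}
ok = check_status(status, False, True, True, True, False, True)
```

In the real script a mismatch fails the test immediately; returning a boolean here just keeps the sketch self-contained.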
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.129 12:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:11.387 12:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:11.387 12:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:11.387 12:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.387 12:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:11.387 12:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.387 12:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:11.387 12:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.387 12:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:11.646 12:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.646 12:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:11.646 12:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.646 12:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:11.904 12:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.904 12:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:11.904 12:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.904 12:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:12.163 12:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.163 12:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:12.163 12:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.163 12:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:12.163 12:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.163 12:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:12.163 12:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:12.422 12:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:12.680 12:15:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:13.620 12:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:13.620 12:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:13.620 12:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.620 12:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:13.880 12:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.880 12:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:13.880 12:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.880 12:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:13.880 12:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.880 12:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:13.880 12:15:03 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.880 12:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:14.139 12:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.139 12:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:14.139 12:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.139 12:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:14.397 12:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.397 12:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:14.397 12:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.397 12:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:14.656 12:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.656 12:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:14.656 12:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.656 12:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:14.656 12:15:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.656 12:15:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:14.656 12:15:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:14.915 12:15:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:15.173 12:15:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:16.107 12:15:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:16.107 12:15:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:16.107 12:15:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.107 12:15:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:16.365 12:15:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.365 12:15:05 
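Each `set_ANA_state` call in the trace issues one `nvmf_subsystem_listener_set_ana_state` RPC per listener port. A sketch that only builds the two `rpc.py` command lines (NQN, address, and ports copied from the log; nothing is executed, and the defaults are illustrative, not SPDK's):

```python
def set_ana_state_cmds(state_4420, state_4421,
                       rpc="scripts/rpc.py",
                       nqn="nqn.2016-06.io.spdk:cnode1",
                       addr="10.0.0.2"):
    """Build the two rpc.py invocations the set_ANA_state helper runs,
    one per TCP listener (4420 then 4421)."""
    cmds = []
    for port, state in (("4420", state_4420), ("4421", state_4421)):
        cmds.append([rpc, "nvmf_subsystem_listener_set_ana_state", nqn,
                     "-t", "tcp", "-a", addr, "-s", port, "-n", state])
    return cmds

# Mirrors 'set_ANA_state non_optimized inaccessible' from the trace.
cmds = set_ana_state_cmds("non_optimized", "inaccessible")
```

The test then sleeps one second before `check_status`, giving the host's ANA log page update time to propagate to the bdevperf paths.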
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:16.365 12:15:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.365 12:15:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:16.365 12:15:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:16.365 12:15:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:16.365 12:15:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.365 12:15:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:16.622 12:15:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.622 12:15:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:16.622 12:15:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.622 12:15:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:16.879 12:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.879 12:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:16.879 
12:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.879 12:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:16.879 12:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.879 12:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:16.879 12:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.879 12:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:17.137 12:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:17.137 12:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2332465 00:26:17.137 12:15:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 2332465 ']' 00:26:17.137 12:15:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 2332465 00:26:17.137 12:15:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname 00:26:17.137 12:15:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:17.137 12:15:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2332465 00:26:17.137 12:15:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:26:17.137 12:15:06 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:26:17.137 12:15:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2332465' 00:26:17.137 killing process with pid 2332465 00:26:17.137 12:15:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 2332465 00:26:17.137 12:15:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 2332465 00:26:17.415 Connection closed with partial response: 00:26:17.415 00:26:17.415 00:26:17.415 12:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2332465 00:26:17.415 12:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:17.415 [2024-06-10 12:14:37.769279] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:26:17.415 [2024-06-10 12:14:37.769336] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2332465 ] 00:26:17.415 EAL: No free 2048 kB hugepages reported on node 1 00:26:17.415 [2024-06-10 12:14:37.835550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.415 [2024-06-10 12:14:37.905766] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:26:17.415 Running I/O for 90 seconds... 
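The teardown above is autotest_common.sh's `killprocess` pattern: probe the pid with `kill -0`, confirm the process name via `ps`, refuse to kill `sudo`, then `kill` and `wait`. A hedged Python equivalent of the probe/terminate/wait sequence against a throwaway child process (the `ps` name check and sudo guard are omitted; this is not the SPDK helper itself):

```python
import os
import signal
import subprocess

def killprocess(proc):
    """Mimic killprocess: verify the pid is alive (kill -0 equivalent),
    send SIGTERM, then wait for the child to be reaped."""
    os.kill(proc.pid, 0)  # raises ProcessLookupError if already gone
    proc.terminate()      # SIGTERM, like the shell helper's plain 'kill'
    proc.wait(timeout=5)  # the 'wait $pid' step
    return proc.returncode

# Demo against a harmless long sleep, not an SPDK application.
child = subprocess.Popen(["sleep", "60"])
rc = killprocess(child)
```

The "Connection closed with partial response" lines that follow are bdevperf noticing the controller going away mid-I/O as the process is killed, after which the script cats the captured `try.txt` log.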
00:26:17.415 [2024-06-10 12:14:51.749792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.415 [2024-06-10 12:14:51.749833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:17.415 [2024-06-10 12:14:51.749872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.415 [2024-06-10 12:14:51.749882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:17.415 [2024-06-10 12:14:51.749898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.415 [2024-06-10 12:14:51.749908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:17.415 [2024-06-10 12:14:51.749922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.415 [2024-06-10 12:14:51.749932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:17.415 [2024-06-10 12:14:51.749947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.415 [2024-06-10 12:14:51.749956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:17.415 [2024-06-10 12:14:51.749971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.415 
[2024-06-10 12:14:51.749980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:17.415 [2024-06-10 12:14:51.749995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.415 [2024-06-10 12:14:51.750005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:17.415 [2024-06-10 12:14:51.750019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.415 [2024-06-10 12:14:51.750028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:17.415 [2024-06-10 12:14:51.750043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.415 [2024-06-10 12:14:51.750052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:17.415 [2024-06-10 12:14:51.750067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.415 [2024-06-10 12:14:51.750077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:17.415 [2024-06-10 12:14:51.750091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.415 [2024-06-10 12:14:51.750106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:17.415 [2024-06-10 
12:14:51.750121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.415 [2024-06-10 12:14:51.750130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:17.415 [2024-06-10 12:14:51.750145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:71328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.415 [2024-06-10 12:14:51.750154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:17.415 [2024-06-10 12:14:51.750169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.415 [2024-06-10 12:14:51.750179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:17.415 [2024-06-10 12:14:51.750193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.415 [2024-06-10 12:14:51.750202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:17.415 [2024-06-10 12:14:51.750217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.415 [2024-06-10 12:14:51.750226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:17.415 [2024-06-10 12:14:51.750241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:71360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.416 [2024-06-10 12:14:51.750251] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:17.416 [2024-06-10 12:14:51.750266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.416 [2024-06-10 12:14:51.750276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:17.416 [2024-06-10 12:14:51.750291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.416 [2024-06-10 12:14:51.750300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:17.416 [2024-06-10 12:14:51.750314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.416 [2024-06-10 12:14:51.750324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:17.416 [2024-06-10 12:14:51.750338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:71392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.416 [2024-06-10 12:14:51.750347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:17.416 [2024-06-10 12:14:51.750362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.416 [2024-06-10 12:14:51.750371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:17.416 [2024-06-10 12:14:51.750386] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.416 [2024-06-10 12:14:51.750395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:17.416 [2024-06-10 12:14:51.750411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.416 [2024-06-10 12:14:51.750420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:17.416 [2024-06-10 12:14:51.750435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.416 [2024-06-10 12:14:51.750444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:17.416 [2024-06-10 12:14:51.750459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.416 [2024-06-10 12:14:51.750468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:17.416 [2024-06-10 12:14:51.750488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:71440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.416 [2024-06-10 12:14:51.750498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:17.416 [2024-06-10 12:14:51.750512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.416 [2024-06-10 12:14:51.750521] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:17.416 [2024-06-10 12:14:51.750536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.416 [2024-06-10 12:14:51.750546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:17.416 [2024-06-10 12:14:51.750561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:71464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.416 [2024-06-10 12:14:51.750570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:17.416 [2024-06-10 12:14:51.750585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.416 [2024-06-10 12:14:51.750594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.416 [2024-06-10 12:14:51.750608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.416 [2024-06-10 12:14:51.750618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:17.416 [2024-06-10 12:14:51.750632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.416 [2024-06-10 12:14:51.750642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:17.416 [2024-06-10 12:14:51.750657] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.416 [2024-06-10 12:14:51.750666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:17.416 [2024-06-10 12:14:51.750681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.416 [2024-06-10 12:14:51.750691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:17.416 [2024-06-10 12:14:51.750707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.416 [2024-06-10 12:14:51.750716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:17.416 [2024-06-10 12:14:51.750730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.416 [2024-06-10 12:14:51.750739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:17.416 [2024-06-10 12:14:51.750754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.416 [2024-06-10 12:14:51.750763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:17.416 [2024-06-10 12:14:51.750777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.416 [2024-06-10 12:14:51.750786] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:17.416 [2024-06-10 12:14:51.750801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.416 [2024-06-10 12:14:51.750811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:17.416 [2024-06-10 12:14:51.750826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.416 [2024-06-10 12:14:51.750835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:17.416 [2024-06-10 12:14:51.750850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.416 [2024-06-10 12:14:51.750859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:17.416 [2024-06-10 12:14:51.750873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.416 [2024-06-10 12:14:51.750882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:17.416 [2024-06-10 12:14:51.750897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.416 [2024-06-10 12:14:51.750906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:17.416 [2024-06-10 12:14:51.750920] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.416 [2024-06-10 12:14:51.750929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:17.416 [2024-06-10 12:14:51.750944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.416 [2024-06-10 12:14:51.750953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:17.416 [2024-06-10 12:14:51.750967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:71600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.416 [2024-06-10 12:14:51.750976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:17.416 [2024-06-10 12:14:51.750991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.416 [2024-06-10 12:14:51.751001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:17.416 [2024-06-10 12:14:51.751016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.416 [2024-06-10 12:14:51.751025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:17.416 [2024-06-10 12:14:51.751039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.417 [2024-06-10 12:14:51.751048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.751063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.417 [2024-06-10 12:14:51.751072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.751087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:71640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.417 [2024-06-10 12:14:51.751096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.751111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.417 [2024-06-10 12:14:51.751120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.751135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.417 [2024-06-10 12:14:51.751144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.751159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.417 [2024-06-10 12:14:51.751168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.751182] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.417 [2024-06-10 12:14:51.751192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.751206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.417 [2024-06-10 12:14:51.751215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.751230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.417 [2024-06-10 12:14:51.751239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.751254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.417 [2024-06-10 12:14:51.751263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.751277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.417 [2024-06-10 12:14:51.751287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.751302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.417 [2024-06-10 12:14:51.751311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.751326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.417 [2024-06-10 12:14:51.751335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.751350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.417 [2024-06-10 12:14:51.751359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.751374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.417 [2024-06-10 12:14:51.751383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.751915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.417 [2024-06-10 12:14:51.751930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.751948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.417 [2024-06-10 12:14:51.751958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.751973] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.417 [2024-06-10 12:14:51.751983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.751998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.417 [2024-06-10 12:14:51.752007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.752021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.417 [2024-06-10 12:14:51.752031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.752046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.417 [2024-06-10 12:14:51.752055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.752069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.417 [2024-06-10 12:14:51.752078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.752093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:71088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.417 [2024-06-10 12:14:51.752102] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.752119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.417 [2024-06-10 12:14:51.752128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.752143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.417 [2024-06-10 12:14:51.752152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.752167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.417 [2024-06-10 12:14:51.752176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.752190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:71816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.417 [2024-06-10 12:14:51.752200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.752214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.417 [2024-06-10 12:14:51.752223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.752238] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:71832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.417 [2024-06-10 12:14:51.752248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.752262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:71840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.417 [2024-06-10 12:14:51.752272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.752287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.417 [2024-06-10 12:14:51.752296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.752310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.417 [2024-06-10 12:14:51.752320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.752334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.417 [2024-06-10 12:14:51.752344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.752358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:71872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.417 [2024-06-10 12:14:51.752367] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.752382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.417 [2024-06-10 12:14:51.752391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:17.417 [2024-06-10 12:14:51.752407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:71888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.417 [2024-06-10 12:14:51.752416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.752431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.418 [2024-06-10 12:14:51.752440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.752455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.418 [2024-06-10 12:14:51.752464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.752484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.418 [2024-06-10 12:14:51.752494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.752509] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:71920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.418 [2024-06-10 12:14:51.752518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.752533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.418 [2024-06-10 12:14:51.752542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.752557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.418 [2024-06-10 12:14:51.752566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.752581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.418 [2024-06-10 12:14:51.752590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.752604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:71104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.418 [2024-06-10 12:14:51.752614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.752628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:71112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.418 [2024-06-10 12:14:51.752637] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.752652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.418 [2024-06-10 12:14:51.752661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.752675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:71128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.418 [2024-06-10 12:14:51.752684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.752699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.418 [2024-06-10 12:14:51.752709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.752730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.418 [2024-06-10 12:14:51.752740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.752755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:71152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.418 [2024-06-10 12:14:51.752764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.752778] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:71160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.418 [2024-06-10 12:14:51.752788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.752803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:71168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.418 [2024-06-10 12:14:51.752812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.752826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.418 [2024-06-10 12:14:51.752835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.752849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.418 [2024-06-10 12:14:51.752859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.752873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:71192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.418 [2024-06-10 12:14:51.752882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.752897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:71200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.418 [2024-06-10 12:14:51.752906] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.752921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:71208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.418 [2024-06-10 12:14:51.752930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.752944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:71216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.418 [2024-06-10 12:14:51.752953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.752967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.418 [2024-06-10 12:14:51.752976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.752991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.418 [2024-06-10 12:14:51.753002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.753016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.418 [2024-06-10 12:14:51.753025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.753039] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.418 [2024-06-10 12:14:51.753049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.753064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:71984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.418 [2024-06-10 12:14:51.753073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.753087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.418 [2024-06-10 12:14:51.753096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.753112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.418 [2024-06-10 12:14:51.753121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.753136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:72008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.418 [2024-06-10 12:14:51.753144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.753159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.418 [2024-06-10 12:14:51.753168] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.753182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.418 [2024-06-10 12:14:51.753191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.753206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.418 [2024-06-10 12:14:51.753215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.753229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:72040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.418 [2024-06-10 12:14:51.753238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.753253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.418 [2024-06-10 12:14:51.753262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:17.418 [2024-06-10 12:14:51.753276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:72056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.753286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.753301] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:71224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.419 [2024-06-10 12:14:51.753310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.753325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.753334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.753348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.753357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.753372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.753381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.753396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.753406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.753420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.753429] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.753923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:72104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.753937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.753954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.753963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.753980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.753990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.754005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.754014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.754028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.754039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.754054] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.754063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.754080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.754090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.754105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.754114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.754129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.754138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.754152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:71296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.754162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.754176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.754185] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.754199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.754209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.754225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.754234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.754249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.754258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.754273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.754282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.754296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.754305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.754320] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.754329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.754343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:71360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.754353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.754368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.754382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.754397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.754406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.754421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.754431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.754445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.754455] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.754469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.754485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.754500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.754509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.754524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:71416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.754533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.754548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.754557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.754571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.754581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.754595] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:71440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.754604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.754618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.754628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.754643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.754652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.754666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.754677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:17.419 [2024-06-10 12:14:51.754693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.419 [2024-06-10 12:14:51.754702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.420 [2024-06-10 12:14:51.754716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.420 [2024-06-10 12:14:51.754725] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:17.420 [2024-06-10 12:14:51.754740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.420 [2024-06-10 12:14:51.754749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:17.420 [2024-06-10 12:14:51.754764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:71496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.420 [2024-06-10 12:14:51.754773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:17.420 [2024-06-10 12:14:51.754788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.420 [2024-06-10 12:14:51.754797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:17.420 [2024-06-10 12:14:51.754812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.420 [2024-06-10 12:14:51.754821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:17.420 [2024-06-10 12:14:51.754835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.420 [2024-06-10 12:14:51.754844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:17.420 [2024-06-10 12:14:51.754859] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.420 [2024-06-10 12:14:51.754869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:17.420 [2024-06-10 12:14:51.754883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.420 [2024-06-10 12:14:51.754892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:17.420 [2024-06-10 12:14:51.754907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.420 [2024-06-10 12:14:51.754916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:17.420 [2024-06-10 12:14:51.754931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.420 [2024-06-10 12:14:51.754940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:17.420 [2024-06-10 12:14:51.754955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.420 [2024-06-10 12:14:51.754964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:17.420 [2024-06-10 12:14:51.754980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.420 [2024-06-10 12:14:51.754989] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:17.420 [2024-06-10 12:14:51.755004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.420 [2024-06-10 12:14:51.755015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:17.420 [2024-06-10 12:14:51.755029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.420 [2024-06-10 12:14:51.755038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:17.420 [2024-06-10 12:14:51.755053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.420 [2024-06-10 12:14:51.755062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:17.420 [2024-06-10 12:14:51.755076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:71600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.420 [2024-06-10 12:14:51.755085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:17.420 [2024-06-10 12:14:51.755100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.420 [2024-06-10 12:14:51.755109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:17.420 [2024-06-10 12:14:51.755123] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.420 [2024-06-10 12:14:51.755132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:17.420 [2024-06-10 12:14:51.755147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.420 [2024-06-10 12:14:51.755156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:17.420 [2024-06-10 12:14:51.755171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.420 [2024-06-10 12:14:51.765542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:17.420 [2024-06-10 12:14:51.765560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.420 [2024-06-10 12:14:51.765570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:17.420 [2024-06-10 12:14:51.765585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.420 [2024-06-10 12:14:51.765594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:17.420 [2024-06-10 12:14:51.765608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.420 [2024-06-10 12:14:51.765617] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:17.420 [2024-06-10 12:14:51.765633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.420 [2024-06-10 12:14:51.765643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:17.420 [2024-06-10 12:14:51.765657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.420 [2024-06-10 12:14:51.765666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:17.420 [2024-06-10 12:14:51.765680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.420 [2024-06-10 12:14:51.765689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:17.420 [2024-06-10 12:14:51.765704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.420 [2024-06-10 12:14:51.765713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:17.420 [2024-06-10 12:14:51.765727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.421 [2024-06-10 12:14:51.765736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.765750] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.421 [2024-06-10 12:14:51.765760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.765774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.421 [2024-06-10 12:14:51.765783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.765798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.421 [2024-06-10 12:14:51.765807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.765821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.421 [2024-06-10 12:14:51.765830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.766398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.421 [2024-06-10 12:14:51.766417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.766434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.421 [2024-06-10 12:14:51.766444] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.766459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.421 [2024-06-10 12:14:51.766468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.766490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.421 [2024-06-10 12:14:51.766502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.766516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.421 [2024-06-10 12:14:51.766526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.766540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.421 [2024-06-10 12:14:51.766549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.766564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.421 [2024-06-10 12:14:51.766573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.766587] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.421 [2024-06-10 12:14:51.766597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.766611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:71088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.421 [2024-06-10 12:14:51.766620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.766635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:71096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.421 [2024-06-10 12:14:51.766644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.766658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.421 [2024-06-10 12:14:51.766667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.766681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:71808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.421 [2024-06-10 12:14:51.766691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.766705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.421 [2024-06-10 12:14:51.766714] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.766728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.421 [2024-06-10 12:14:51.766737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.766752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:71832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.421 [2024-06-10 12:14:51.766761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.766775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:71840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.421 [2024-06-10 12:14:51.766786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.766800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.421 [2024-06-10 12:14:51.766809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.766824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.421 [2024-06-10 12:14:51.766833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.766847] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.421 [2024-06-10 12:14:51.766856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.766871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.421 [2024-06-10 12:14:51.766880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.766894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.421 [2024-06-10 12:14:51.766903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.766918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:71888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.421 [2024-06-10 12:14:51.766927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.766941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.421 [2024-06-10 12:14:51.766950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.766965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.421 [2024-06-10 12:14:51.766974] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.766988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.421 [2024-06-10 12:14:51.766997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.767012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:71920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.421 [2024-06-10 12:14:51.767020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.767035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:71928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.421 [2024-06-10 12:14:51.767044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.767058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:71936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.421 [2024-06-10 12:14:51.767067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.767083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:71944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.421 [2024-06-10 12:14:51.767093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.767107] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.421 [2024-06-10 12:14:51.767116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.767130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.421 [2024-06-10 12:14:51.767139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.767154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:71120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.421 [2024-06-10 12:14:51.767163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.767177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.421 [2024-06-10 12:14:51.767186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.767201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:71136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.421 [2024-06-10 12:14:51.767210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:17.421 [2024-06-10 12:14:51.767225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:71144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.421 [2024-06-10 12:14:51.767234] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:17.422 [2024-06-10 12:14:51.767249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:71152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.422 [2024-06-10 12:14:51.767258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:17.422 [2024-06-10 12:14:51.767273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:71160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.422 [2024-06-10 12:14:51.767282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:17.422 [2024-06-10 12:14:51.767296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:71168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.422 [2024-06-10 12:14:51.767305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:17.422 [2024-06-10 12:14:51.767320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:71176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.422 [2024-06-10 12:14:51.767329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:17.422 [2024-06-10 12:14:51.767343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:71184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.422 [2024-06-10 12:14:51.767352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:17.422 [2024-06-10 12:14:51.767368] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:71192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.422 [2024-06-10 12:14:51.767377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:17.422 [2024-06-10 12:14:51.767391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:71200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.422 [2024-06-10 12:14:51.767401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:17.422 [2024-06-10 12:14:51.767415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:71208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.422 [2024-06-10 12:14:51.767424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:17.422 [2024-06-10 12:14:51.767439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:71216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.422 [2024-06-10 12:14:51.767448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:17.422 [2024-06-10 12:14:51.767463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:71952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.422 [2024-06-10 12:14:51.767471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:17.422 [2024-06-10 12:14:51.767489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.422 [2024-06-10 12:14:51.767499] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:17.422 [2024-06-10 12:14:51.767513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.422 [2024-06-10 12:14:51.767522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:17.422 [2024-06-10 12:14:51.767537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:71976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.422 [2024-06-10 12:14:51.767547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:17.422 [2024-06-10 12:14:51.767561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:71984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.422 [2024-06-10 12:14:51.767570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:17.422 [2024-06-10 12:14:51.767585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.422 [2024-06-10 12:14:51.767594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:17.422 [2024-06-10 12:14:51.767608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.422 [2024-06-10 12:14:51.767617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:17.422 [2024-06-10 12:14:51.767632] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.422 [2024-06-10 12:14:51.767641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:17.422 [2024-06-10 12:14:51.767656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.422 [2024-06-10 12:14:51.767666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:17.422 [2024-06-10 12:14:51.767680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.422 [2024-06-10 12:14:51.767690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:17.422 [2024-06-10 12:14:51.767704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.422 [2024-06-10 12:14:51.767713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:17.422 [2024-06-10 12:14:51.767727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.422 [2024-06-10 12:14:51.767737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:17.422 [2024-06-10 12:14:51.767751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.422 [2024-06-10 12:14:51.767760] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:17.422 [2024-06-10 12:14:51.767775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.422 [2024-06-10 12:14:51.767784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:17.422 [2024-06-10 12:14:51.767798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:71224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.422 [2024-06-10 12:14:51.767807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:17.422 [2024-06-10 12:14:51.767822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.422 [2024-06-10 12:14:51.767831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:17.422 [2024-06-10 12:14:51.767845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:72072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.422 [2024-06-10 12:14:51.767854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:17.422 [2024-06-10 12:14:51.767869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.422 [2024-06-10 12:14:51.767878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:17.422 [2024-06-10 12:14:51.767892] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.422 [2024-06-10 12:14:51.767902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:17.422 [2024-06-10 12:14:51.768389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.422 [2024-06-10 12:14:51.768403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.422 [2024-06-10 12:14:51.768419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:72104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.422 [2024-06-10 12:14:51.768430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:17.422 [2024-06-10 12:14:51.768445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.422 [2024-06-10 12:14:51.768455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:17.422 [2024-06-10 12:14:51.768469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.422 [2024-06-10 12:14:51.768483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:17.422 [2024-06-10 12:14:51.768498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.422 [2024-06-10 12:14:51.768507] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:17.422 [2024-06-10 12:14:51.768521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.768530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.768545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.768554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.768568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.768577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.768592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.768601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.768616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.768625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.768639] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.768648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.768663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.768672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.768687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:71312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.768696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.768710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.768719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.768736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.768745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.768759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.768768] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.768783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:71344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.768792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.768806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.768815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.768829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.768839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.768853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.768862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.768877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:71376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.768886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.768900] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.768909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.768924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:71392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.768933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.768947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.768956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.768971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.768980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.768994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.769005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.769021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.769030] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.769045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.769055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.769069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:71440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.769078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.769093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:71448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.769102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.769117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.769126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.769141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.769150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.769164] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.769173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.769188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.769197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.769211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.769221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.769235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.769245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.769259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.769268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.769283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.769292] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.769306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.769317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.769332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.769341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.769355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.769365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.769379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.769388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.769403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.769412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.769426] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.769435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:17.423 [2024-06-10 12:14:51.769450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.423 [2024-06-10 12:14:51.769459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.769473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.424 [2024-06-10 12:14:51.769489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.769504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.424 [2024-06-10 12:14:51.769515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.769530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.424 [2024-06-10 12:14:51.769539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.769554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.424 [2024-06-10 12:14:51.769563] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.769577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.424 [2024-06-10 12:14:51.769586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.769601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.424 [2024-06-10 12:14:51.769611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.769627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.424 [2024-06-10 12:14:51.769637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.769651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.424 [2024-06-10 12:14:51.769660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.769675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.424 [2024-06-10 12:14:51.769684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.769698] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.424 [2024-06-10 12:14:51.769707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.769721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.424 [2024-06-10 12:14:51.769731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.769745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.424 [2024-06-10 12:14:51.769754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.769768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.424 [2024-06-10 12:14:51.769777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.769792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.424 [2024-06-10 12:14:51.769801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.769816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.424 [2024-06-10 12:14:51.769824] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.769839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.424 [2024-06-10 12:14:51.769848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.769862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.424 [2024-06-10 12:14:51.769873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.769887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.424 [2024-06-10 12:14:51.769896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.769912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.424 [2024-06-10 12:14:51.769921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.770454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.424 [2024-06-10 12:14:51.770469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.770491] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.424 [2024-06-10 12:14:51.770501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.770516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.424 [2024-06-10 12:14:51.770525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.770541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.424 [2024-06-10 12:14:51.770550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.770565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.424 [2024-06-10 12:14:51.770574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.770589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.424 [2024-06-10 12:14:51.770598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.770612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.424 [2024-06-10 12:14:51.770621] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.770636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.424 [2024-06-10 12:14:51.770645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.770659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.424 [2024-06-10 12:14:51.770668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.770684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:71088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.424 [2024-06-10 12:14:51.770693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.770707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:71096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.424 [2024-06-10 12:14:51.770716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.770735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.424 [2024-06-10 12:14:51.770744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.770759] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:71808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.424 [2024-06-10 12:14:51.770768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.770782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:71816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.424 [2024-06-10 12:14:51.770794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.770808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.424 [2024-06-10 12:14:51.770817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.770832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:71832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.424 [2024-06-10 12:14:51.770841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:17.424 [2024-06-10 12:14:51.770855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.425 [2024-06-10 12:14:51.770865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:17.425 [2024-06-10 12:14:51.770879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.425 [2024-06-10 12:14:51.770888] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:17.425 [2024-06-10 12:14:51.770902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:71856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.425 [2024-06-10 12:14:51.770912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:17.425 [2024-06-10 12:14:51.770928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.425 [2024-06-10 12:14:51.770937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:17.425 [2024-06-10 12:14:51.770951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:71872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.425 [2024-06-10 12:14:51.770960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:17.425 [2024-06-10 12:14:51.770974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.425 [2024-06-10 12:14:51.770983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:17.425 [2024-06-10 12:14:51.770998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.425 [2024-06-10 12:14:51.771007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:17.425 [2024-06-10 12:14:51.771021] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:71896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.425 [2024-06-10 12:14:51.771032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:17.425 [2024-06-10 12:14:51.771046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:71904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.425 [2024-06-10 12:14:51.771055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:17.425 [2024-06-10 12:14:51.771070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.425 [2024-06-10 12:14:51.771079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:17.425 [2024-06-10 12:14:51.771093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.425 [2024-06-10 12:14:51.771102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:17.425 [2024-06-10 12:14:51.771117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.425 [2024-06-10 12:14:51.771126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:17.425 [2024-06-10 12:14:51.771140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:71936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.425 [2024-06-10 12:14:51.771149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:17.425 [2024-06-10 12:14:51.771164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:71944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.425 [2024-06-10 12:14:51.771173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:17.425 [2024-06-10 12:14:51.771188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.425 [2024-06-10 12:14:51.771197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:17.425 [2024-06-10 12:14:51.771211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:71112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.425 [2024-06-10 12:14:51.771220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.425 [2024-06-10 12:14:51.771235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.425 [2024-06-10 12:14:51.771244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.425 [2024-06-10 12:14:51.771259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.425 [2024-06-10 12:14:51.771268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:17.425 [2024-06-10 12:14:51.771282] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:71136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.425 [2024-06-10 12:14:51.771291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:17.425 [2024-06-10 12:14:51.771307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:71144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.425 [2024-06-10 12:14:51.771318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:17.425 [2024-06-10 12:14:51.771332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:71152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.425 [2024-06-10 12:14:51.771341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:17.425 [2024-06-10 12:14:51.771356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.425 [2024-06-10 12:14:51.771365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:17.425 [2024-06-10 12:14:51.771379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.425 [2024-06-10 12:14:51.771388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:17.425 [2024-06-10 12:14:51.771403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:71176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.425 [2024-06-10 12:14:51.771412] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:17.425 [2024-06-10 12:14:51.771427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:71184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.425 [2024-06-10 12:14:51.771436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:17.425 [2024-06-10 12:14:51.771450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:71192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.425 [2024-06-10 12:14:51.771459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:17.425 [2024-06-10 12:14:51.771474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:71200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.425 [2024-06-10 12:14:51.777579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:17.425 [2024-06-10 12:14:51.777597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:71208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.425 [2024-06-10 12:14:51.777607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:17.425 [2024-06-10 12:14:51.777621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:71216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.425 [2024-06-10 12:14:51.777630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:17.425 [2024-06-10 12:14:51.777645] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.425 [2024-06-10 12:14:51.777654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:17.425 [2024-06-10 12:14:51.777668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.425 [2024-06-10 12:14:51.777677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:17.425 [2024-06-10 12:14:51.777692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:71968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.425 [2024-06-10 12:14:51.777702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:17.426 [2024-06-10 12:14:51.777718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:71976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.426 [2024-06-10 12:14:51.777727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:17.426 [2024-06-10 12:14:51.777741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.426 [2024-06-10 12:14:51.777751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:17.426 [2024-06-10 12:14:51.777765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.426 [2024-06-10 12:14:51.777774] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:17.426 [2024-06-10 12:14:51.777789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.426 [2024-06-10 12:14:51.777798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:17.426 [2024-06-10 12:14:51.777813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.426 [2024-06-10 12:14:51.777822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:17.426 [2024-06-10 12:14:51.777836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:72016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.426 [2024-06-10 12:14:51.777845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:17.426 [2024-06-10 12:14:51.777859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:72024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.426 [2024-06-10 12:14:51.777869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:17.426 [2024-06-10 12:14:51.777883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.426 [2024-06-10 12:14:51.777892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:17.426 [2024-06-10 12:14:51.777906] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:72040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.426 [2024-06-10 12:14:51.777915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:17.426 [2024-06-10 12:14:51.777930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.426 [2024-06-10 12:14:51.777939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:17.426 [2024-06-10 12:14:51.777953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.426 [2024-06-10 12:14:51.777962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:17.426 [2024-06-10 12:14:51.777977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:71224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.426 [2024-06-10 12:14:51.777986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:17.426 [2024-06-10 12:14:51.778002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.426 [2024-06-10 12:14:51.778011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:17.426 [2024-06-10 12:14:51.778025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.426 [2024-06-10 12:14:51.778034] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:17.426 [2024-06-10 12:14:51.778049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.426 [2024-06-10 12:14:51.778059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:17.426 [2024-06-10 12:14:51.778598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:72088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.426 [2024-06-10 12:14:51.778616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:17.426 [2024-06-10 12:14:51.778632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.426 [2024-06-10 12:14:51.778642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.426 [2024-06-10 12:14:51.778656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.426 [2024-06-10 12:14:51.778665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:17.426 [2024-06-10 12:14:51.778680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.426 [2024-06-10 12:14:51.778689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:17.426 [2024-06-10 12:14:51.778704] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.426 [2024-06-10 12:14:51.778713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:26:17.426 [2024-06-10 12:14:51.778728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.426 [2024-06-10 12:14:51.778737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
[... ~110 similar command/completion pairs elided: WRITE (sqid:1, lba:71256-72016, len:8) and interleaved READ (sqid:1, lba:71088-71216, len:8) commands, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), logged 12:14:51.778-12:14:51.782, identical except for cid, lba, and sqhd ...]
00:26:17.429 [2024-06-10 12:14:51.781990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:26:17.429 [2024-06-10 12:14:51.782004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.429 [2024-06-10 12:14:51.782015] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:17.429 [2024-06-10 12:14:51.782029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.429 [2024-06-10 12:14:51.782038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:17.429 [2024-06-10 12:14:51.782053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.429 [2024-06-10 12:14:51.782062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:17.429 [2024-06-10 12:14:51.782076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.429 [2024-06-10 12:14:51.782086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:17.429 [2024-06-10 12:14:51.782100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.429 [2024-06-10 12:14:51.782109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:17.429 [2024-06-10 12:14:51.782123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.429 [2024-06-10 12:14:51.782133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:17.429 [2024-06-10 12:14:51.782147] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.429 [2024-06-10 12:14:51.782156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:17.429 [2024-06-10 12:14:51.782171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.429 [2024-06-10 12:14:51.782180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:17.429 [2024-06-10 12:14:51.782684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.429 [2024-06-10 12:14:51.782698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:17.429 [2024-06-10 12:14:51.782715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:72088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.429 [2024-06-10 12:14:51.782724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:17.429 [2024-06-10 12:14:51.782739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.429 [2024-06-10 12:14:51.782749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.429 [2024-06-10 12:14:51.782764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.429 [2024-06-10 12:14:51.782773] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:17.429 [2024-06-10 12:14:51.782787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.429 [2024-06-10 12:14:51.782797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:17.429 [2024-06-10 12:14:51.782818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.429 [2024-06-10 12:14:51.782827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:17.429 [2024-06-10 12:14:51.782842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.429 [2024-06-10 12:14:51.782852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:17.429 [2024-06-10 12:14:51.782867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.429 [2024-06-10 12:14:51.782876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:17.429 [2024-06-10 12:14:51.782891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.429 [2024-06-10 12:14:51.782902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:17.429 [2024-06-10 12:14:51.782918] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.782927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.782942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.782951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.782966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.782975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.782990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:71296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783047] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:71328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783185] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:71360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:71376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:71392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783315] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:71432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783452] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783586] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783723] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783851] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:17.430 [2024-06-10 12:14:51.783866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.430 [2024-06-10 12:14:51.783875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.783890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.783899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.783913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.783923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.783938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.783947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.783962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.783972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.783987] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.783996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.784012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.784021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.784036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:71640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.784045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.784060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.784069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.784083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.784093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.784108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.784117] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.784131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.784141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.784155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.784165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.784179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.784189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.784205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.784214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.784229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.784238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.784765] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.784780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.784796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.784806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.784823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.784832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.784847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.784857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.784872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.784881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.784896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.784906] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.784920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.784929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.784944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.784953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.784967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.784977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.784991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.785001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.785016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.785025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.785040] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.431 [2024-06-10 12:14:51.785049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.785063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:71096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.431 [2024-06-10 12:14:51.785073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.785087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.785097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.785111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:71808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.785122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.785137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:71816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.785147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.785161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.785170] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.785185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.785194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.785209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:71840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.785218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.785233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.785242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.785257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:71856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.785266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.785281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.785290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.785304] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.785313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:17.431 [2024-06-10 12:14:51.785328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.431 [2024-06-10 12:14:51.785339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.785354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:71888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.432 [2024-06-10 12:14:51.785363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.785377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.432 [2024-06-10 12:14:51.785388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.785402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.432 [2024-06-10 12:14:51.785413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.785428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.432 [2024-06-10 12:14:51.785437] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.785452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:71920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.432 [2024-06-10 12:14:51.785461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.785480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:71928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.432 [2024-06-10 12:14:51.785490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.785505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.432 [2024-06-10 12:14:51.785514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.785530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:71944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.432 [2024-06-10 12:14:51.785539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.785554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.432 [2024-06-10 12:14:51.785563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.785578] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.432 [2024-06-10 12:14:51.785587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.785602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:71120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.432 [2024-06-10 12:14:51.785611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.785626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:71128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.432 [2024-06-10 12:14:51.785635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.785650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:71136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.432 [2024-06-10 12:14:51.785661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.785676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.432 [2024-06-10 12:14:51.785685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.785700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.432 [2024-06-10 12:14:51.785709] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.785726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:71160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.432 [2024-06-10 12:14:51.785735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.785750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:71168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.432 [2024-06-10 12:14:51.785759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.785773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:71176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.432 [2024-06-10 12:14:51.785783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.785798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:71184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.432 [2024-06-10 12:14:51.785807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.785822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:71192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.432 [2024-06-10 12:14:51.785833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.785847] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:71200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.432 [2024-06-10 12:14:51.785857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.785871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.432 [2024-06-10 12:14:51.785881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.785895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:71216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.432 [2024-06-10 12:14:51.785905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.785920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:71952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.432 [2024-06-10 12:14:51.785929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.785945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:71960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.432 [2024-06-10 12:14:51.785955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.785970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.432 [2024-06-10 12:14:51.785979] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.785993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.432 [2024-06-10 12:14:51.786002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.786018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:71984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.432 [2024-06-10 12:14:51.786027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.786042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.432 [2024-06-10 12:14:51.786051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.786066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.432 [2024-06-10 12:14:51.786075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.786090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:72008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.432 [2024-06-10 12:14:51.786099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.786114] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.432 [2024-06-10 12:14:51.786123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.786137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:72024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.432 [2024-06-10 12:14:51.786147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.786161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.432 [2024-06-10 12:14:51.786170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.786186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.432 [2024-06-10 12:14:51.786196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.786210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.432 [2024-06-10 12:14:51.786220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.786235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.432 [2024-06-10 12:14:51.786244] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.786259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:71224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.432 [2024-06-10 12:14:51.786268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:17.432 [2024-06-10 12:14:51.786283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.433 [2024-06-10 12:14:51.786292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:17.433 [2024-06-10 12:14:51.786794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:72072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.433 [2024-06-10 12:14:51.786812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:17.433 [2024-06-10 12:14:51.786829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.433 [2024-06-10 12:14:51.786838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:17.433 [2024-06-10 12:14:51.786853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.433 [2024-06-10 12:14:51.786862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:17.433 [2024-06-10 12:14:51.786877] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.433 [2024-06-10 12:14:51.786886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.433 [2024-06-10 12:14:51.786901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.433 [2024-06-10 12:14:51.786912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:17.433 [2024-06-10 12:14:51.786928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.433 [2024-06-10 12:14:51.786938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:17.433 [2024-06-10 12:14:51.786955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.433 [2024-06-10 12:14:51.786966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:17.433 [2024-06-10 12:14:51.786982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.433 [2024-06-10 12:14:51.786992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:17.433 [2024-06-10 12:14:51.787009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.433 [2024-06-10 12:14:51.787018] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:17.433 [2024-06-10 12:14:51.787033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:71264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.433 [2024-06-10 12:14:51.787043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:17.433 [2024-06-10 12:14:51.787058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.433 [2024-06-10 12:14:51.787068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:17.433 [2024-06-10 12:14:51.787082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.433 [2024-06-10 12:14:51.787092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:17.433 [2024-06-10 12:14:51.787107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.433 [2024-06-10 12:14:51.787118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:17.433 [2024-06-10 12:14:51.787133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.433 [2024-06-10 12:14:51.787142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:17.433 [2024-06-10 12:14:51.787157] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.433 [2024-06-10 12:14:51.787166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:17.433 [2024-06-10 12:14:51.787181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.433 [2024-06-10 12:14:51.787190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:17.433 [2024-06-10 12:14:51.787205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.433 [2024-06-10 12:14:51.787214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:17.433 [2024-06-10 12:14:51.787228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:71328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.433 [2024-06-10 12:14:51.787237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:17.433 [2024-06-10 12:14:51.787252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.433 [2024-06-10 12:14:51.787261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:17.433 [2024-06-10 12:14:51.787276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.433 [2024-06-10 12:14:51.787286] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:17.433 [2024-06-10 12:14:51.787301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.433 [2024-06-10 12:14:51.787311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:17.433 [2024-06-10 12:14:51.787326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.433 [2024-06-10 12:14:51.787335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:17.433 [2024-06-10 12:14:51.787351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.433 [2024-06-10 12:14:51.787360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:17.433 [2024-06-10 12:14:51.787374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.433 [2024-06-10 12:14:51.787384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:17.433 [2024-06-10 12:14:51.787398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.433 [2024-06-10 12:14:51.787407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:17.433 [2024-06-10 12:14:51.787423] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.433 [2024-06-10 12:14:51.787432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:17.433 [2024-06-10 12:14:51.787447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.433 [2024-06-10 12:14:51.787457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:17.433 [2024-06-10 12:14:51.787473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.433 [2024-06-10 12:14:51.787488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:17.433 [2024-06-10 12:14:51.787502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.433 [2024-06-10 12:14:51.787512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:17.433 [2024-06-10 12:14:51.787526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.433 [2024-06-10 12:14:51.787536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:17.433 [2024-06-10 12:14:51.787550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.433 [2024-06-10 12:14:51.787560] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:17.433 [2024-06-10 12:14:51.787574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.433 [2024-06-10 12:14:51.787583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003d p:0 m:0 dnr:0
[... further nvme_io_qpair_print_command WRITE/READ NOTICE lines (sqid:1, lba 71088-72104) each followed by an spdk_nvme_print_completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion, repeated for every outstanding I/O on qid:1; identical pattern trimmed ...]
00:26:17.436 [2024-06-10 12:14:51.791387] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.436 [2024-06-10 12:14:51.791396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:17.436 [2024-06-10 12:14:51.791411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:71344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.791423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.791437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.791447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.791462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:71360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.791472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.791492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.791502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.791518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:71376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.791528] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.791542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.791551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.791566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:71392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.791576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.791590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.791600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.791614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.791624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.791638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:71416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.791649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.791664] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.791674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.791688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.791698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.791713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.791723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.791738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.791747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.791762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.791772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.791786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.791795] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.791810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.791819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.791834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.791843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.791858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.791867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.791883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:71496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.791892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.791907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.791916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.791931] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.791940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.791955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.791964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.791978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.791988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.792002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.792011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.792028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.792037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.792052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.792061] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.792076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.792085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.792100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.792109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.792124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.792133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.792148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.792157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.792172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.792181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.792195] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.792205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.792219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.792229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.792243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.792253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.792268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.792277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.792292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.792301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.792317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.792327] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.792341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.792350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:17.437 [2024-06-10 12:14:51.792365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.437 [2024-06-10 12:14:51.792374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:17.438 [2024-06-10 12:14:51.792389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.438 [2024-06-10 12:14:51.792398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:17.438 [2024-06-10 12:14:51.792413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.438 [2024-06-10 12:14:51.792422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:17.438 [2024-06-10 12:14:51.792437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.438 [2024-06-10 12:14:51.792447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:17.438 [2024-06-10 12:14:51.792462] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.438 [2024-06-10 12:14:51.792471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:17.438 [2024-06-10 12:14:51.793000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.438 [2024-06-10 12:14:51.793014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:17.438 [2024-06-10 12:14:51.793030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.438 [2024-06-10 12:14:51.793040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:17.438 [2024-06-10 12:14:51.793054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.438 [2024-06-10 12:14:51.793064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:17.438 [2024-06-10 12:14:51.793078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.438 [2024-06-10 12:14:51.793088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:17.438 [2024-06-10 12:14:51.793102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.438 [2024-06-10 12:14:51.793111] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.438 [2024-06-10 12:14:51.793126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.438 [2024-06-10 12:14:51.793138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.438 [2024-06-10 12:14:51.793152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.438 [2024-06-10 12:14:51.793162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:17.438 [2024-06-10 12:14:51.793176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.438 [2024-06-10 12:14:51.793185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:17.438 [2024-06-10 12:14:51.793200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.438 [2024-06-10 12:14:51.793209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:17.438 [2024-06-10 12:14:51.793224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.438 [2024-06-10 12:14:51.793233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:17.438 [2024-06-10 12:14:51.793248] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.438 [2024-06-10 12:14:51.793257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:17.438 [2024-06-10 12:14:51.793272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.438 [2024-06-10 12:14:51.793281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:17.438 [2024-06-10 12:14:51.793296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.438 [2024-06-10 12:14:51.793305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:17.438 [2024-06-10 12:14:51.793320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:71088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.438 [2024-06-10 12:14:51.793329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:17.438 [2024-06-10 12:14:51.793344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:71096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.438 [2024-06-10 12:14:51.793353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:17.438 [2024-06-10 12:14:51.793368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.438 [2024-06-10 12:14:51.793377] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:17.438 [2024-06-10 12:14:51.793391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.438 [2024-06-10 12:14:51.793401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:17.438 [2024-06-10 12:14:51.793415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.438 [2024-06-10 12:14:51.793426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:17.438 [2024-06-10 12:14:51.793441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.438 [2024-06-10 12:14:51.793450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:17.438 [2024-06-10 12:14:51.793465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.438 [2024-06-10 12:14:51.793474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:17.438 [2024-06-10 12:14:51.793493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:71840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.438 [2024-06-10 12:14:51.793503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:17.438 [2024-06-10 12:14:51.793517] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.438 [2024-06-10 12:14:51.793526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:17.438 [2024-06-10 12:14:51.793541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.438 [2024-06-10 12:14:51.793550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:17.438 [2024-06-10 12:14:51.793565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:71864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.438 [2024-06-10 12:14:51.793574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:17.438 [2024-06-10 12:14:51.793589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:71872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.438 [2024-06-10 12:14:51.793598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:17.438 [2024-06-10 12:14:51.793613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.438 [2024-06-10 12:14:51.793622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:17.438 [2024-06-10 12:14:51.793637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.438 [2024-06-10 12:14:51.793646] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:17.438 [2024-06-10 12:14:51.793661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.438 [2024-06-10 12:14:51.793670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
[... further repeated nvme_qpair.c NOTICE pairs elided: each READ/WRITE command on sqid:1 (nsid:1, lba 71088-72104, len:8) completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, timestamps 12:14:51.793684 through 12:14:51.801035 ...]
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:71096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.441 [2024-06-10 12:14:51.801045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:17.441 [2024-06-10 12:14:51.801060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.441 [2024-06-10 12:14:51.801069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:17.441 [2024-06-10 12:14:51.801083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.441 [2024-06-10 12:14:51.801093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:17.441 [2024-06-10 12:14:51.801107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:71816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.441 [2024-06-10 12:14:51.801116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:17.441 [2024-06-10 12:14:51.801131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.441 [2024-06-10 12:14:51.801140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.442 [2024-06-10 12:14:51.801164] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.442 [2024-06-10 12:14:51.801189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.442 [2024-06-10 12:14:51.801213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:71856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.442 [2024-06-10 12:14:51.801237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.442 [2024-06-10 12:14:51.801261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.442 [2024-06-10 12:14:51.801286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801301] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.442 [2024-06-10 12:14:51.801310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.442 [2024-06-10 12:14:51.801334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.442 [2024-06-10 12:14:51.801358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:71904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.442 [2024-06-10 12:14:51.801381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.442 [2024-06-10 12:14:51.801405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.442 [2024-06-10 12:14:51.801430] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:71928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.442 [2024-06-10 12:14:51.801453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:71936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.442 [2024-06-10 12:14:51.801482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:71944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.442 [2024-06-10 12:14:51.801506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:71104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.442 [2024-06-10 12:14:51.801530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:71112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.442 [2024-06-10 12:14:51.801554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801568] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.442 [2024-06-10 12:14:51.801579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.442 [2024-06-10 12:14:51.801603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:71136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.442 [2024-06-10 12:14:51.801627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.442 [2024-06-10 12:14:51.801652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:71152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.442 [2024-06-10 12:14:51.801676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:71160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.442 [2024-06-10 12:14:51.801700] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:71168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.442 [2024-06-10 12:14:51.801724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:71176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.442 [2024-06-10 12:14:51.801748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:71184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.442 [2024-06-10 12:14:51.801772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:71192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.442 [2024-06-10 12:14:51.801796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:71200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.442 [2024-06-10 12:14:51.801820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801835] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:71208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.442 [2024-06-10 12:14:51.801844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:71216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.442 [2024-06-10 12:14:51.801869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:71952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.442 [2024-06-10 12:14:51.801894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.442 [2024-06-10 12:14:51.801917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:71968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.442 [2024-06-10 12:14:51.801941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.442 [2024-06-10 12:14:51.801966] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.801981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.442 [2024-06-10 12:14:51.801990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:17.442 [2024-06-10 12:14:51.802005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.443 [2024-06-10 12:14:51.802014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:17.443 [2024-06-10 12:14:51.802029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.443 [2024-06-10 12:14:51.802038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:17.443 [2024-06-10 12:14:51.802053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.443 [2024-06-10 12:14:51.802063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:17.443 [2024-06-10 12:14:51.802077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.443 [2024-06-10 12:14:51.802086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:17.443 [2024-06-10 12:14:51.802101] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.443 [2024-06-10 12:14:51.802110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:17.443 [2024-06-10 12:14:51.802125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.443 [2024-06-10 12:14:51.802134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:17.443 [2024-06-10 12:14:51.802148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.443 [2024-06-10 12:14:51.802157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:17.443 [2024-06-10 12:14:51.802173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.443 [2024-06-10 12:14:51.802183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:17.443 [2024-06-10 12:14:51.802197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.443 [2024-06-10 12:14:51.802207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:17.443 [2024-06-10 12:14:51.802221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:71224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.443 [2024-06-10 12:14:51.802231] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:17.443 [2024-06-10 12:14:51.802245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.443 [2024-06-10 12:14:51.802255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:17.443 [2024-06-10 12:14:51.802269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.443 [2024-06-10 12:14:51.802279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:17.443 [2024-06-10 12:14:51.802293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.443 [2024-06-10 12:14:51.802303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:17.443 [2024-06-10 12:14:51.802317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:72088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.443 [2024-06-10 12:14:51.802327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:17.443 [2024-06-10 12:14:51.802341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.443 [2024-06-10 12:14:51.802350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.443 [2024-06-10 12:14:51.802365] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.443 [2024-06-10 12:14:51.802375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:17.443 [2024-06-10 12:14:51.802935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.443 [2024-06-10 12:14:51.802950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:17.443 [2024-06-10 12:14:51.802967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.443 [2024-06-10 12:14:51.802976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:17.443 [2024-06-10 12:14:51.802991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.443 [2024-06-10 12:14:51.803000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:17.443 [2024-06-10 12:14:51.803015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.443 [2024-06-10 12:14:51.803026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:17.443 [2024-06-10 12:14:51.803041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.443 [2024-06-10 12:14:51.803051] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:17.443 [2024-06-10 12:14:51.803065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.443 [2024-06-10 12:14:51.803074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:17.443 [2024-06-10 12:14:51.803089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.443 [2024-06-10 12:14:51.803098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:17.443 [2024-06-10 12:14:51.803113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.443 [2024-06-10 12:14:51.803123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:17.443 [2024-06-10 12:14:51.803137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.443 [2024-06-10 12:14:51.803146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:17.443 [2024-06-10 12:14:51.803161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.443 [2024-06-10 12:14:51.803170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:17.443 [2024-06-10 12:14:51.803185] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.443 [2024-06-10 12:14:51.803194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:17.443 [2024-06-10 12:14:51.803209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.443 [2024-06-10 12:14:51.803218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:17.443 [2024-06-10 12:14:51.803233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:71328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.443 [2024-06-10 12:14:51.803242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:17.443 [2024-06-10 12:14:51.803256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.443 [2024-06-10 12:14:51.803266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:17.443 [2024-06-10 12:14:51.803280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.443 [2024-06-10 12:14:51.803290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:17.443 [2024-06-10 12:14:51.803305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.443 [2024-06-10 12:14:51.803316] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:17.443 [2024-06-10 12:14:51.803330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:71360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.443 [2024-06-10 12:14:51.803339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
[... repeated near-identical command/completion pairs condensed: between 12:14:51.803330 and 12:14:51.807138, every outstanding I/O on qid:1 (WRITE lba:71360-72104 and READ lba:71088-71240, len:8 each) completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), logged by nvme_qpair.c:243 nvme_io_qpair_print_command and nvme_qpair.c:474 spdk_nvme_print_completion ...]
00:26:17.446 [2024-06-10 12:14:51.807138] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.446 [2024-06-10 12:14:51.807147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:17.446 [2024-06-10 12:14:51.807163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.446 [2024-06-10 12:14:51.807173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:17.446 [2024-06-10 12:14:51.807188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.446 [2024-06-10 12:14:51.807198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:17.446 [2024-06-10 12:14:51.807212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.446 [2024-06-10 12:14:51.807222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:17.446 [2024-06-10 12:14:51.807236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.446 [2024-06-10 12:14:51.807246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:17.446 [2024-06-10 12:14:51.807260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:71296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.446 [2024-06-10 12:14:51.807270] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:17.446 [2024-06-10 12:14:51.807285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.446 [2024-06-10 12:14:51.807294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:17.446 [2024-06-10 12:14:51.807311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.446 [2024-06-10 12:14:51.807321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:17.446 [2024-06-10 12:14:51.807335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.446 [2024-06-10 12:14:51.807345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.807359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.447 [2024-06-10 12:14:51.807369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.807384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.447 [2024-06-10 12:14:51.807393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.807408] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.447 [2024-06-10 12:14:51.807417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.807432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.447 [2024-06-10 12:14:51.807443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.807458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.447 [2024-06-10 12:14:51.807468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.807487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.447 [2024-06-10 12:14:51.807497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.807513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:71376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.447 [2024-06-10 12:14:51.807522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.807537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.447 [2024-06-10 12:14:51.807546] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.807561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.447 [2024-06-10 12:14:51.807570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.807585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.447 [2024-06-10 12:14:51.807595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.807611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.447 [2024-06-10 12:14:51.807621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.807635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.447 [2024-06-10 12:14:51.807644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.807659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.447 [2024-06-10 12:14:51.807669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.807683] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:71432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.447 [2024-06-10 12:14:51.807693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.807707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:71440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.447 [2024-06-10 12:14:51.807717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.807732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.447 [2024-06-10 12:14:51.807741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.807755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.447 [2024-06-10 12:14:51.807765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.807779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.447 [2024-06-10 12:14:51.807789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.807803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.447 [2024-06-10 12:14:51.807813] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.807827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.447 [2024-06-10 12:14:51.807837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.807851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.447 [2024-06-10 12:14:51.807861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.807875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.447 [2024-06-10 12:14:51.807884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.807899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.447 [2024-06-10 12:14:51.807910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.807925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.447 [2024-06-10 12:14:51.807935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.807949] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.447 [2024-06-10 12:14:51.807959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.807973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.447 [2024-06-10 12:14:51.807983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.807997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.447 [2024-06-10 12:14:51.808007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.808021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.447 [2024-06-10 12:14:51.808031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.808045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.447 [2024-06-10 12:14:51.808055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.808070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.447 [2024-06-10 12:14:51.808079] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.808095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.447 [2024-06-10 12:14:51.808104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.808119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.447 [2024-06-10 12:14:51.808128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.808143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.447 [2024-06-10 12:14:51.808153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.808167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.447 [2024-06-10 12:14:51.808177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:17.447 [2024-06-10 12:14:51.808191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.448 [2024-06-10 12:14:51.808203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:17.448 [2024-06-10 12:14:51.808218] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.448 [2024-06-10 12:14:51.808228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:17.448 [2024-06-10 12:14:51.808242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.448 [2024-06-10 12:14:51.808253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:17.448 [2024-06-10 12:14:51.808268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.448 [2024-06-10 12:14:51.808277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:17.448 [2024-06-10 12:14:51.808292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.448 [2024-06-10 12:14:51.808301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:17.448 [2024-06-10 12:14:51.808316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.448 [2024-06-10 12:14:51.808326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:17.448 [2024-06-10 12:14:51.808340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.448 [2024-06-10 12:14:51.808350] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:17.448 [2024-06-10 12:14:51.808365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.448 [2024-06-10 12:14:51.808374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:17.448 [2024-06-10 12:14:51.808389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.448 [2024-06-10 12:14:51.808399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:17.448 [2024-06-10 12:14:51.808863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.448 [2024-06-10 12:14:51.808877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:17.448 [2024-06-10 12:14:51.808893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.448 [2024-06-10 12:14:51.808902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:17.448 [2024-06-10 12:14:51.808917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.448 [2024-06-10 12:14:51.808927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:17.448 [2024-06-10 12:14:51.808941] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.448 [2024-06-10 12:14:51.808950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:17.448 [2024-06-10 12:14:51.808967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.448 [2024-06-10 12:14:51.808977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:17.448 [2024-06-10 12:14:51.808991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.448 [2024-06-10 12:14:51.809000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:17.448 [2024-06-10 12:14:51.809015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.448 [2024-06-10 12:14:51.809025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:17.448 [2024-06-10 12:14:51.809039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.448 [2024-06-10 12:14:51.809049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.448 [2024-06-10 12:14:51.809063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.448 [2024-06-10 12:14:51.809073] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.448 [2024-06-10 12:14:51.809087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.448 [2024-06-10 12:14:51.809096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:17.448 [2024-06-10 12:14:51.809111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.448 [2024-06-10 12:14:51.809121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:17.448 [2024-06-10 12:14:51.809135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.448 [2024-06-10 12:14:51.809146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:17.448 [2024-06-10 12:14:51.809160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.448 [2024-06-10 12:14:51.809171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:17.448 [2024-06-10 12:14:51.809186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.448 [2024-06-10 12:14:51.809196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:17.448 [2024-06-10 12:14:51.809210] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.448 [2024-06-10 12:14:51.809220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:17.448 [2024-06-10 12:14:51.809235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.448 [2024-06-10 12:14:51.809244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:17.448 [2024-06-10 12:14:51.809260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:71088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.448 [2024-06-10 12:14:51.809270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:17.448 [2024-06-10 12:14:51.809285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:71096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.448 [2024-06-10 12:14:51.809294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:17.448 [2024-06-10 12:14:51.809309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.448 [2024-06-10 12:14:51.809318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:17.448 [2024-06-10 12:14:51.809332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:71808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.448 [2024-06-10 12:14:51.809342] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:17.448 [2024-06-10 12:14:51.809356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.448 [2024-06-10 12:14:51.809365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:26:17.448 [2024-06-10 12:14:51.809773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.448 [2024-06-10 12:14:51.809783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0
[... WRITE/READ command and completion NOTICE pairs repeated for qid:1 nsid:1, lba 71104-72104, every completion failing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:26:17.451 [2024-06-10 12:15:04.434813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.451 [2024-06-10 12:15:04.434853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004b p:0 m:0 dnr:0
[... READ command and completion NOTICE pairs repeated for qid:1 nsid:1, lba 9208-9312, every completion failing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:26:17.451 [2024-06-10 12:15:04.435049] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.451 [2024-06-10 12:15:04.435059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:17.451 [2024-06-10 12:15:04.435074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.451 [2024-06-10 12:15:04.435083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:17.451 [2024-06-10 12:15:04.435098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.451 [2024-06-10 12:15:04.435108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:17.451 [2024-06-10 12:15:04.435123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.451 [2024-06-10 12:15:04.435138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:17.451 [2024-06-10 12:15:04.435153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.451 [2024-06-10 12:15:04.435163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:17.452 [2024-06-10 12:15:04.435178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.452 [2024-06-10 12:15:04.435188] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:17.452 [2024-06-10 12:15:04.435203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.452 [2024-06-10 12:15:04.435213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:17.452 [2024-06-10 12:15:04.438146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.452 [2024-06-10 12:15:04.438170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:17.452 [2024-06-10 12:15:04.438191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.452 [2024-06-10 12:15:04.438201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:17.452 [2024-06-10 12:15:04.438216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.452 [2024-06-10 12:15:04.438226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:17.452 [2024-06-10 12:15:04.438241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.452 [2024-06-10 12:15:04.438250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:17.452 [2024-06-10 12:15:04.438265] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:9488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.452 [2024-06-10 12:15:04.438275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:17.452 [2024-06-10 12:15:04.438289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.452 [2024-06-10 12:15:04.438299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:17.452 [2024-06-10 12:15:04.438313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.452 [2024-06-10 12:15:04.438323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:17.452 [2024-06-10 12:15:04.438338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.452 [2024-06-10 12:15:04.438347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:17.452 [2024-06-10 12:15:04.438362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.452 [2024-06-10 12:15:04.438372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.452 [2024-06-10 12:15:04.438390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.452 [2024-06-10 12:15:04.438400] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.452 [2024-06-10 12:15:04.438415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:9584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.452 [2024-06-10 12:15:04.438425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:17.452 [2024-06-10 12:15:04.438440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.452 [2024-06-10 12:15:04.438450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:17.452 [2024-06-10 12:15:04.438464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.452 [2024-06-10 12:15:04.438474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:17.452 [2024-06-10 12:15:04.438495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.452 [2024-06-10 12:15:04.438505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:17.452 [2024-06-10 12:15:04.438519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.452 [2024-06-10 12:15:04.438528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:17.452 [2024-06-10 12:15:04.438542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.452 [2024-06-10 12:15:04.438552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:17.452 [2024-06-10 12:15:04.438567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.452 [2024-06-10 12:15:04.438576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:17.452 [2024-06-10 12:15:04.438590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.452 [2024-06-10 12:15:04.438599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:17.452 [2024-06-10 12:15:04.438614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.452 [2024-06-10 12:15:04.438623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:17.452 [2024-06-10 12:15:04.438638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.452 [2024-06-10 12:15:04.438647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:17.452 [2024-06-10 12:15:04.438661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.452 [2024-06-10 12:15:04.438671] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:17.452 [2024-06-10 12:15:04.438685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.452 [2024-06-10 12:15:04.438696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:17.452 [2024-06-10 12:15:04.438711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.452 [2024-06-10 12:15:04.438720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:17.452 [2024-06-10 12:15:04.438735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.452 [2024-06-10 12:15:04.438744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:17.452 [2024-06-10 12:15:04.438759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.452 [2024-06-10 12:15:04.438768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:17.452 Received shutdown signal, test time was about 26.761275 seconds 00:26:17.452 00:26:17.452 Latency(us) 00:26:17.452 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.452 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:17.452 Verification LBA range: start 0x0 length 0x4000 00:26:17.452 Nvme0n1 : 26.76 10772.16 42.08 0.00 0.00 11861.87 167.94 3073585.97 
00:26:17.452 =================================================================================================================== 00:26:17.452 Total : 10772.16 42.08 0.00 0.00 11861.87 167.94 3073585.97 00:26:17.452 12:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:17.710 12:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:17.710 12:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:17.710 12:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:17.710 12:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:17.710 12:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:26:17.710 12:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:17.710 12:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:26:17.710 12:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:17.710 12:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:17.710 rmmod nvme_tcp 00:26:17.710 rmmod nvme_fabrics 00:26:17.710 rmmod nvme_keyring 00:26:17.710 12:15:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:17.710 12:15:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:26:17.710 12:15:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:26:17.710 12:15:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2332140 ']' 00:26:17.710 12:15:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # 
killprocess 2332140 00:26:17.710 12:15:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 2332140 ']' 00:26:17.710 12:15:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 2332140 00:26:17.710 12:15:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname 00:26:17.710 12:15:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:17.710 12:15:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2332140 00:26:17.710 12:15:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:17.710 12:15:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:17.710 12:15:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2332140' 00:26:17.710 killing process with pid 2332140 00:26:17.710 12:15:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 2332140 00:26:17.710 12:15:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 2332140 00:26:17.968 12:15:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:17.968 12:15:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:17.968 12:15:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:17.968 12:15:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:17.968 12:15:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:17.968 12:15:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:17.968 12:15:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:26:17.968 12:15:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:19.870 12:15:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:19.870 00:26:19.870 real 0m40.752s 00:26:19.870 user 1m42.871s 00:26:19.870 sys 0m14.802s 00:26:19.870 12:15:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:19.870 12:15:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:19.870 ************************************ 00:26:19.870 END TEST nvmf_host_multipath_status 00:26:19.870 ************************************ 00:26:19.870 12:15:09 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:19.870 12:15:09 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:26:19.870 12:15:09 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:19.870 12:15:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:20.129 ************************************ 00:26:20.129 START TEST nvmf_discovery_remove_ifc 00:26:20.129 ************************************ 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:20.129 * Looking for test storage... 
00:26:20.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:20.129 12:15:09 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- 
# '[' -n '' ']' 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.129 12:15:09 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:26:20.129 12:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:26.704 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:26.704 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:26.704 Found net devices under 0000:af:00.0: cvl_0_0 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:26.704 Found net devices under 0000:af:00.1: cvl_0_1 
00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:26.704 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:26.705 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:26.705 12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:26.705 
12:15:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:26.705 12:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:26.705 12:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:26.705 12:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:26.705 12:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:26.705 12:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:26.705 12:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:26.705 12:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:26.705 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:26.705 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:26:26.705 00:26:26.705 --- 10.0.0.2 ping statistics --- 00:26:26.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.705 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:26:26.705 12:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:26.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:26.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:26:26.705 00:26:26.705 --- 10.0.0.1 ping statistics --- 00:26:26.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.705 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:26:26.705 12:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:26.705 12:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:26:26.705 12:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:26.705 12:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:26.705 12:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:26.705 12:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:26.705 12:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:26.705 12:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:26.705 12:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:26.972 12:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:26.972 12:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:26.972 12:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:26.972 12:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:26.972 12:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2341699 00:26:26.972 12:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:26.972 
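The `nvmf_tcp_init` trace above builds a single-host loopback topology: one E810 port (`cvl_0_0`) is moved into the `cvl_0_0_ns_spdk` namespace to act as the target side, while its sibling port (`cvl_0_1`) stays in the root namespace as the initiator, and reachability is verified with `ping` in both directions before `nvme-tcp` is loaded. A minimal dry-run sketch of that plumbing, with the interface names and addresses taken from the log (the `run` wrapper is illustrative and not part of `nvmf/common.sh`):

```shell
# Dry-run sketch of the namespace setup performed by nvmf_tcp_init in the
# trace above. Interface names and IPs come from the log; swap the echo in
# run() for "$@" to execute the commands for real (root required).
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # target side, moved into the namespace
INI_IF=cvl_0_1   # initiator side, stays in the root namespace

run() { echo "+ $*"; }   # print instead of execute

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
```

Once applied for real, target-side commands are prefixed with `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` array in the trace), which is why `nvmf_tgt` is launched through that wrapper a few lines below.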
12:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2341699 00:26:26.972 12:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 2341699 ']' 00:26:26.972 12:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:26.972 12:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:26.972 12:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:26.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:26.972 12:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:26.972 12:15:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:26.972 [2024-06-10 12:15:16.293889] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:26:26.972 [2024-06-10 12:15:16.293941] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:26.972 EAL: No free 2048 kB hugepages reported on node 1 00:26:26.972 [2024-06-10 12:15:16.368356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.972 [2024-06-10 12:15:16.442460] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:26.972 [2024-06-10 12:15:16.442500] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:26.972 [2024-06-10 12:15:16.442510] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:26.972 [2024-06-10 12:15:16.442519] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:26.972 [2024-06-10 12:15:16.442526] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:26.972 [2024-06-10 12:15:16.442545] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:26:27.903 12:15:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:27.903 12:15:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0 00:26:27.903 12:15:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:27.903 12:15:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:27.903 12:15:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:27.903 12:15:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:27.903 12:15:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:27.903 12:15:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:27.903 12:15:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:27.903 [2024-06-10 12:15:17.145280] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:27.903 [2024-06-10 12:15:17.153432] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:27.903 null0 00:26:27.903 [2024-06-10 12:15:17.185452] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:27.903 12:15:17 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:27.903 12:15:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2341767 00:26:27.903 12:15:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:27.903 12:15:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2341767 /tmp/host.sock 00:26:27.903 12:15:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 2341767 ']' 00:26:27.903 12:15:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock 00:26:27.903 12:15:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:27.903 12:15:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:27.903 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:27.903 12:15:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:27.903 12:15:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:27.903 [2024-06-10 12:15:17.252686] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:26:27.903 [2024-06-10 12:15:17.252731] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2341767 ] 00:26:27.903 EAL: No free 2048 kB hugepages reported on node 1 00:26:27.903 [2024-06-10 12:15:17.321957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.903 [2024-06-10 12:15:17.397228] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.835 12:15:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:28.835 12:15:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0 00:26:28.835 12:15:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:28.835 12:15:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:28.835 12:15:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:28.835 12:15:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.835 12:15:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:28.835 12:15:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:28.835 12:15:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:28.835 12:15:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.835 12:15:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:28.835 12:15:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:28.835 12:15:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:28.835 12:15:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:29.767 [2024-06-10 12:15:19.153250] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:29.767 [2024-06-10 12:15:19.153270] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:29.767 [2024-06-10 12:15:19.153283] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:29.767 [2024-06-10 12:15:19.240547] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:30.025 [2024-06-10 12:15:19.424819] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:30.025 [2024-06-10 12:15:19.424867] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:30.025 [2024-06-10 12:15:19.424888] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:30.025 [2024-06-10 12:15:19.424905] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:30.025 [2024-06-10 12:15:19.424924] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:30.025 12:15:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.025 12:15:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:30.025 12:15:19 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:30.025 12:15:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:30.025 12:15:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:30.025 [2024-06-10 12:15:19.432353] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x118a3a0 was disconnected and freed. delete nvme_qpair. 00:26:30.025 12:15:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.025 12:15:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.025 12:15:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:30.025 12:15:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:30.025 12:15:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.025 12:15:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:30.025 12:15:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:30.025 12:15:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:30.282 12:15:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:30.282 12:15:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:30.282 12:15:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:30.282 12:15:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:30.282 12:15:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
xargs 00:26:30.282 12:15:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:30.282 12:15:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.282 12:15:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.282 12:15:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.282 12:15:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:30.282 12:15:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:31.215 12:15:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:31.215 12:15:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:31.215 12:15:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:31.215 12:15:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:31.215 12:15:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:31.215 12:15:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:31.215 12:15:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:31.215 12:15:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:31.215 12:15:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:31.215 12:15:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:32.615 12:15:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:32.615 12:15:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:26:32.615 12:15:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:32.615 12:15:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:32.615 12:15:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:32.615 12:15:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:32.615 12:15:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:32.615 12:15:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:32.615 12:15:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:32.615 12:15:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:33.547 12:15:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:33.547 12:15:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:33.547 12:15:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:33.547 12:15:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:33.547 12:15:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:33.547 12:15:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:33.547 12:15:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:33.547 12:15:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:33.547 12:15:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:33.547 12:15:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 
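The repeated `get_bdev_list` / `sleep 1` blocks above are the test's `wait_for_bdev` polling loop: it keeps listing bdevs over the `/tmp/host.sock` RPC socket until the expected name (`nvme0n1` here, or the empty string after the interface is taken down) shows up. A self-contained sketch of that pattern, with `rpc_cmd` stubbed out (in the real script it invokes `scripts/rpc.py`, and the name extraction uses `jq -r '.[].name'`; grep/cut is substituted here only to keep the sketch dependency-free):

```shell
# Sketch of the get_bdev_list/wait_for_bdev pattern from
# host/discovery_remove_ifc.sh. rpc_cmd is a stub returning one attached bdev.
rpc_cmd() { echo '[{"name":"nvme0n1"}]'; }

get_bdev_list() {
  # real helper: rpc_cmd bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  rpc_cmd bdev_get_bdevs | grep -o '"name":"[^"]*"' | cut -d'"' -f4 | sort | xargs
}

wait_for_bdev() {
  local expected=$1
  while [ "$(get_bdev_list)" != "$expected" ]; do
    sleep 1
  done
}
```

With the stub in place, `wait_for_bdev nvme0n1` returns immediately; in the live test each iteration that still sees the old list produces exactly the repeating `rpc_cmd`/`jq`/`sort`/`xargs`/`sleep 1` block visible in the trace.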
00:26:34.479 12:15:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:34.479 12:15:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:34.479 12:15:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:34.479 12:15:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:34.479 12:15:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.479 12:15:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:34.479 12:15:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:34.479 12:15:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.479 12:15:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:34.479 12:15:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:35.411 [2024-06-10 12:15:24.866089] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:35.411 [2024-06-10 12:15:24.866129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.411 [2024-06-10 12:15:24.866142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.411 [2024-06-10 12:15:24.866153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.411 [2024-06-10 12:15:24.866162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:35.411 [2024-06-10 12:15:24.866171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.411 [2024-06-10 12:15:24.866180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.411 [2024-06-10 12:15:24.866189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.411 [2024-06-10 12:15:24.866198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.411 [2024-06-10 12:15:24.866208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.411 [2024-06-10 12:15:24.866217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.411 [2024-06-10 12:15:24.866225] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1151510 is same with the state(5) to be set 00:26:35.411 [2024-06-10 12:15:24.876110] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1151510 (9): Bad file descriptor 00:26:35.411 12:15:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:35.411 12:15:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:35.411 12:15:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:35.411 12:15:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.411 12:15:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:35.411 12:15:24 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:35.411 12:15:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:35.411 [2024-06-10 12:15:24.886152] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:36.473 [2024-06-10 12:15:25.948541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:36.473 [2024-06-10 12:15:25.948584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1151510 with addr=10.0.0.2, port=4420 00:26:36.473 [2024-06-10 12:15:25.948601] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1151510 is same with the state(5) to be set 00:26:36.473 [2024-06-10 12:15:25.948628] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1151510 (9): Bad file descriptor 00:26:36.473 [2024-06-10 12:15:25.948692] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:36.473 [2024-06-10 12:15:25.948712] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:36.473 [2024-06-10 12:15:25.948725] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:36.474 [2024-06-10 12:15:25.948738] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:36.474 [2024-06-10 12:15:25.948758] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.474 [2024-06-10 12:15:25.948770] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:36.474 12:15:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:36.474 12:15:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:36.474 12:15:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:37.845 [2024-06-10 12:15:26.951245] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:37.845 [2024-06-10 12:15:26.951277] bdev_nvme.c:6729:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:37.845 [2024-06-10 12:15:26.951314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.845 [2024-06-10 12:15:26.951326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.845 [2024-06-10 12:15:26.951337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.845 [2024-06-10 12:15:26.951346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.845 [2024-06-10 12:15:26.951356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.845 [2024-06-10 12:15:26.951365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.845 [2024-06-10 12:15:26.951375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:26:37.845 [2024-06-10 12:15:26.951387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.845 [2024-06-10 12:15:26.951397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.845 [2024-06-10 12:15:26.951406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.845 [2024-06-10 12:15:26.951415] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:26:37.845 [2024-06-10 12:15:26.951768] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11509a0 (9): Bad file descriptor 00:26:37.845 [2024-06-10 12:15:26.952778] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:37.845 [2024-06-10 12:15:26.952790] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:37.845 12:15:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:37.845 12:15:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:37.845 12:15:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:37.845 12:15:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:37.845 12:15:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:37.845 12:15:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:37.845 12:15:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:37.845 12:15:26 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:37.845 12:15:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:37.845 12:15:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:37.846 12:15:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:37.846 12:15:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:37.846 12:15:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:37.846 12:15:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:37.846 12:15:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:37.846 12:15:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:37.846 12:15:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:37.846 12:15:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:37.846 12:15:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:37.846 12:15:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:37.846 12:15:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:37.846 12:15:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:38.778 12:15:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:38.778 12:15:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:38.778 12:15:28 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:38.778 12:15:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:38.778 12:15:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:38.778 12:15:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:38.779 12:15:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:38.779 12:15:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:38.779 12:15:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:38.779 12:15:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:39.711 [2024-06-10 12:15:29.008955] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:39.711 [2024-06-10 12:15:29.008973] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:39.711 [2024-06-10 12:15:29.008986] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:39.711 [2024-06-10 12:15:29.135368] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:39.711 12:15:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:39.711 12:15:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:39.711 12:15:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:39.711 12:15:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:39.711 12:15:29 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:26:39.711 12:15:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:39.711 12:15:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:39.969 12:15:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:39.969 12:15:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:39.969 12:15:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:39.969 [2024-06-10 12:15:29.310937] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:39.969 [2024-06-10 12:15:29.310971] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:39.969 [2024-06-10 12:15:29.310989] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:39.969 [2024-06-10 12:15:29.311003] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:39.969 [2024-06-10 12:15:29.311012] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:39.969 [2024-06-10 12:15:29.317810] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1194bf0 was disconnected and freed. delete nvme_qpair. 
00:26:40.908 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:40.908 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:40.908 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:40.908 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:40.908 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:40.908 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:40.908 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.908 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:40.908 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:40.908 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:40.908 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2341767 00:26:40.908 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 2341767 ']' 00:26:40.908 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 2341767 00:26:40.908 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname 00:26:40.908 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:40.908 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2341767 00:26:40.908 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:40.908 12:15:30 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:40.908 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2341767' 00:26:40.908 killing process with pid 2341767 00:26:40.908 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 2341767 00:26:40.908 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 2341767 00:26:41.169 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:41.169 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:41.169 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:26:41.169 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:41.169 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:26:41.169 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:41.169 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:41.169 rmmod nvme_tcp 00:26:41.169 rmmod nvme_fabrics 00:26:41.169 rmmod nvme_keyring 00:26:41.169 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:41.169 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:26:41.169 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:26:41.169 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2341699 ']' 00:26:41.169 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2341699 00:26:41.169 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 2341699 ']' 00:26:41.169 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # 
kill -0 2341699 00:26:41.169 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname 00:26:41.169 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:41.169 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2341699 00:26:41.169 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:26:41.169 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:26:41.169 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2341699' 00:26:41.169 killing process with pid 2341699 00:26:41.169 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 2341699 00:26:41.169 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 2341699 00:26:41.427 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:41.427 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:41.427 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:41.427 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:41.427 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:41.427 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.427 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:41.427 12:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:43.957 12:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:26:43.957 00:26:43.957 real 0m23.493s 00:26:43.957 user 0m28.415s 00:26:43.957 sys 0m7.264s 00:26:43.957 12:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:43.957 12:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:43.957 ************************************ 00:26:43.957 END TEST nvmf_discovery_remove_ifc 00:26:43.957 ************************************ 00:26:43.957 12:15:32 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:43.957 12:15:32 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:26:43.957 12:15:32 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:43.957 12:15:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:43.957 ************************************ 00:26:43.957 START TEST nvmf_identify_kernel_target 00:26:43.957 ************************************ 00:26:43.957 12:15:32 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:43.957 * Looking for test storage... 
00:26:43.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:43.957 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:43.957 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:43.957 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:43.957 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:43.957 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:43.957 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:43.957 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:43.957 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:43.957 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:43.957 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:43.957 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:43.957 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:43.957 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:26:43.957 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:26:43.957 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:43.957 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:26:43.957 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:43.957 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:43.957 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:43.957 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:43.957 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:43.957 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:43.957 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.958 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.958 12:15:33 
nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.958 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:43.958 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.958 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:26:43.958 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:43.958 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:43.958 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:43.958 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:43.958 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:43.958 12:15:33 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:43.958 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:43.958 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:43.958 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:43.958 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:43.958 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:43.958 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:43.958 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:43.958 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:43.958 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:43.958 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:43.958 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:43.958 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:43.958 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:43.958 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:26:43.958 12:15:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:26:50.526 12:15:39 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:50.526 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.526 12:15:39 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:50.526 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:50.526 Found net devices under 0000:af:00.0: cvl_0_0 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:50.526 Found net devices under 0000:af:00.1: cvl_0_1 00:26:50.526 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link 
set cvl_0_1 up 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:50.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:50.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:26:50.527 00:26:50.527 --- 10.0.0.2 ping statistics --- 00:26:50.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.527 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:50.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:50.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:26:50.527 00:26:50.527 --- 10.0.0.1 ping statistics --- 00:26:50.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.527 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.527 12:15:39 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:50.527 12:15:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:53.813 Waiting for block devices as requested 00:26:53.813 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:53.813 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:53.813 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:53.813 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:53.813 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:53.813 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:53.813 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:53.813 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:54.072 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:54.072 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:54.072 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:54.330 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:54.330 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:54.330 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:54.588 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:54.588 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:54.588 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:26:54.847 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:54.847 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:54.847 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:54.847 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:26:54.847 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:54.847 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:26:54.847 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:54.847 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:54.847 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:54.847 No valid GPT data, bailing 00:26:54.847 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:54.847 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:26:54.847 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:26:54.847 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:54.847 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:54.847 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:54.847 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:54.847 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:54.847 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:54.847 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:26:54.847 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:54.847 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@669 -- # echo 1 00:26:54.847 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:54.847 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:26:54.847 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:26:54.847 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:26:54.847 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:54.847 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:26:55.106 00:26:55.106 Discovery Log Number of Records 2, Generation counter 2 00:26:55.106 =====Discovery Log Entry 0====== 00:26:55.106 trtype: tcp 00:26:55.106 adrfam: ipv4 00:26:55.106 subtype: current discovery subsystem 00:26:55.106 treq: not specified, sq flow control disable supported 00:26:55.106 portid: 1 00:26:55.106 trsvcid: 4420 00:26:55.106 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:55.106 traddr: 10.0.0.1 00:26:55.106 eflags: none 00:26:55.106 sectype: none 00:26:55.106 =====Discovery Log Entry 1====== 00:26:55.106 trtype: tcp 00:26:55.106 adrfam: ipv4 00:26:55.106 subtype: nvme subsystem 00:26:55.106 treq: not specified, sq flow control disable supported 00:26:55.106 portid: 1 00:26:55.106 trsvcid: 4420 00:26:55.106 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:55.106 traddr: 10.0.0.1 00:26:55.106 eflags: none 00:26:55.106 sectype: none 00:26:55.106 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:55.106 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:55.106 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.106 ===================================================== 00:26:55.106 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:55.106 ===================================================== 00:26:55.106 Controller Capabilities/Features 00:26:55.106 ================================ 00:26:55.106 Vendor ID: 0000 00:26:55.106 Subsystem Vendor ID: 0000 00:26:55.106 Serial Number: 23375962888a66748352 00:26:55.106 Model Number: Linux 00:26:55.106 Firmware Version: 6.7.0-68 00:26:55.106 Recommended Arb Burst: 0 00:26:55.106 IEEE OUI Identifier: 00 00 00 00:26:55.106 Multi-path I/O 00:26:55.106 May have multiple subsystem ports: No 00:26:55.106 May have multiple controllers: No 00:26:55.106 Associated with SR-IOV VF: No 00:26:55.106 Max Data Transfer Size: Unlimited 00:26:55.106 Max Number of Namespaces: 0 00:26:55.106 Max Number of I/O Queues: 1024 00:26:55.106 NVMe Specification Version (VS): 1.3 00:26:55.106 NVMe Specification Version (Identify): 1.3 00:26:55.106 Maximum Queue Entries: 1024 00:26:55.106 Contiguous Queues Required: No 00:26:55.106 Arbitration Mechanisms Supported 00:26:55.106 Weighted Round Robin: Not Supported 00:26:55.106 Vendor Specific: Not Supported 00:26:55.106 Reset Timeout: 7500 ms 00:26:55.106 Doorbell Stride: 4 bytes 00:26:55.106 NVM Subsystem Reset: Not Supported 00:26:55.106 Command Sets Supported 00:26:55.106 NVM Command Set: Supported 00:26:55.106 Boot Partition: Not Supported 00:26:55.106 Memory Page Size Minimum: 4096 bytes 00:26:55.106 Memory Page Size Maximum: 4096 bytes 00:26:55.106 Persistent Memory Region: Not Supported 00:26:55.106 Optional Asynchronous Events Supported 00:26:55.106 Namespace Attribute Notices: Not Supported 00:26:55.106 Firmware Activation Notices: Not Supported 00:26:55.106 ANA Change Notices: Not Supported 00:26:55.106 PLE Aggregate Log Change Notices: Not Supported 
00:26:55.106 LBA Status Info Alert Notices: Not Supported 00:26:55.106 EGE Aggregate Log Change Notices: Not Supported 00:26:55.106 Normal NVM Subsystem Shutdown event: Not Supported 00:26:55.106 Zone Descriptor Change Notices: Not Supported 00:26:55.106 Discovery Log Change Notices: Supported 00:26:55.106 Controller Attributes 00:26:55.106 128-bit Host Identifier: Not Supported 00:26:55.106 Non-Operational Permissive Mode: Not Supported 00:26:55.106 NVM Sets: Not Supported 00:26:55.106 Read Recovery Levels: Not Supported 00:26:55.106 Endurance Groups: Not Supported 00:26:55.106 Predictable Latency Mode: Not Supported 00:26:55.106 Traffic Based Keep ALive: Not Supported 00:26:55.106 Namespace Granularity: Not Supported 00:26:55.106 SQ Associations: Not Supported 00:26:55.106 UUID List: Not Supported 00:26:55.106 Multi-Domain Subsystem: Not Supported 00:26:55.106 Fixed Capacity Management: Not Supported 00:26:55.106 Variable Capacity Management: Not Supported 00:26:55.106 Delete Endurance Group: Not Supported 00:26:55.106 Delete NVM Set: Not Supported 00:26:55.106 Extended LBA Formats Supported: Not Supported 00:26:55.106 Flexible Data Placement Supported: Not Supported 00:26:55.106 00:26:55.106 Controller Memory Buffer Support 00:26:55.106 ================================ 00:26:55.106 Supported: No 00:26:55.106 00:26:55.106 Persistent Memory Region Support 00:26:55.106 ================================ 00:26:55.106 Supported: No 00:26:55.106 00:26:55.106 Admin Command Set Attributes 00:26:55.106 ============================ 00:26:55.106 Security Send/Receive: Not Supported 00:26:55.106 Format NVM: Not Supported 00:26:55.106 Firmware Activate/Download: Not Supported 00:26:55.106 Namespace Management: Not Supported 00:26:55.106 Device Self-Test: Not Supported 00:26:55.106 Directives: Not Supported 00:26:55.106 NVMe-MI: Not Supported 00:26:55.106 Virtualization Management: Not Supported 00:26:55.106 Doorbell Buffer Config: Not Supported 00:26:55.106 Get LBA Status 
Capability: Not Supported 00:26:55.106 Command & Feature Lockdown Capability: Not Supported 00:26:55.106 Abort Command Limit: 1 00:26:55.106 Async Event Request Limit: 1 00:26:55.106 Number of Firmware Slots: N/A 00:26:55.106 Firmware Slot 1 Read-Only: N/A 00:26:55.106 Firmware Activation Without Reset: N/A 00:26:55.106 Multiple Update Detection Support: N/A 00:26:55.106 Firmware Update Granularity: No Information Provided 00:26:55.106 Per-Namespace SMART Log: No 00:26:55.106 Asymmetric Namespace Access Log Page: Not Supported 00:26:55.106 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:55.106 Command Effects Log Page: Not Supported 00:26:55.106 Get Log Page Extended Data: Supported 00:26:55.106 Telemetry Log Pages: Not Supported 00:26:55.106 Persistent Event Log Pages: Not Supported 00:26:55.106 Supported Log Pages Log Page: May Support 00:26:55.106 Commands Supported & Effects Log Page: Not Supported 00:26:55.106 Feature Identifiers & Effects Log Page:May Support 00:26:55.106 NVMe-MI Commands & Effects Log Page: May Support 00:26:55.106 Data Area 4 for Telemetry Log: Not Supported 00:26:55.106 Error Log Page Entries Supported: 1 00:26:55.106 Keep Alive: Not Supported 00:26:55.106 00:26:55.106 NVM Command Set Attributes 00:26:55.106 ========================== 00:26:55.106 Submission Queue Entry Size 00:26:55.106 Max: 1 00:26:55.106 Min: 1 00:26:55.106 Completion Queue Entry Size 00:26:55.106 Max: 1 00:26:55.106 Min: 1 00:26:55.106 Number of Namespaces: 0 00:26:55.106 Compare Command: Not Supported 00:26:55.106 Write Uncorrectable Command: Not Supported 00:26:55.106 Dataset Management Command: Not Supported 00:26:55.106 Write Zeroes Command: Not Supported 00:26:55.106 Set Features Save Field: Not Supported 00:26:55.106 Reservations: Not Supported 00:26:55.106 Timestamp: Not Supported 00:26:55.106 Copy: Not Supported 00:26:55.106 Volatile Write Cache: Not Present 00:26:55.106 Atomic Write Unit (Normal): 1 00:26:55.106 Atomic Write Unit (PFail): 1 
00:26:55.106 Atomic Compare & Write Unit: 1 00:26:55.106 Fused Compare & Write: Not Supported 00:26:55.106 Scatter-Gather List 00:26:55.106 SGL Command Set: Supported 00:26:55.106 SGL Keyed: Not Supported 00:26:55.106 SGL Bit Bucket Descriptor: Not Supported 00:26:55.106 SGL Metadata Pointer: Not Supported 00:26:55.106 Oversized SGL: Not Supported 00:26:55.106 SGL Metadata Address: Not Supported 00:26:55.106 SGL Offset: Supported 00:26:55.106 Transport SGL Data Block: Not Supported 00:26:55.106 Replay Protected Memory Block: Not Supported 00:26:55.106 00:26:55.106 Firmware Slot Information 00:26:55.106 ========================= 00:26:55.106 Active slot: 0 00:26:55.106 00:26:55.106 00:26:55.106 Error Log 00:26:55.106 ========= 00:26:55.106 00:26:55.106 Active Namespaces 00:26:55.107 ================= 00:26:55.107 Discovery Log Page 00:26:55.107 ================== 00:26:55.107 Generation Counter: 2 00:26:55.107 Number of Records: 2 00:26:55.107 Record Format: 0 00:26:55.107 00:26:55.107 Discovery Log Entry 0 00:26:55.107 ---------------------- 00:26:55.107 Transport Type: 3 (TCP) 00:26:55.107 Address Family: 1 (IPv4) 00:26:55.107 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:55.107 Entry Flags: 00:26:55.107 Duplicate Returned Information: 0 00:26:55.107 Explicit Persistent Connection Support for Discovery: 0 00:26:55.107 Transport Requirements: 00:26:55.107 Secure Channel: Not Specified 00:26:55.107 Port ID: 1 (0x0001) 00:26:55.107 Controller ID: 65535 (0xffff) 00:26:55.107 Admin Max SQ Size: 32 00:26:55.107 Transport Service Identifier: 4420 00:26:55.107 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:55.107 Transport Address: 10.0.0.1 00:26:55.107 Discovery Log Entry 1 00:26:55.107 ---------------------- 00:26:55.107 Transport Type: 3 (TCP) 00:26:55.107 Address Family: 1 (IPv4) 00:26:55.107 Subsystem Type: 2 (NVM Subsystem) 00:26:55.107 Entry Flags: 00:26:55.107 Duplicate Returned Information: 0 00:26:55.107 Explicit Persistent 
Connection Support for Discovery: 0 00:26:55.107 Transport Requirements: 00:26:55.107 Secure Channel: Not Specified 00:26:55.107 Port ID: 1 (0x0001) 00:26:55.107 Controller ID: 65535 (0xffff) 00:26:55.107 Admin Max SQ Size: 32 00:26:55.107 Transport Service Identifier: 4420 00:26:55.107 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:55.107 Transport Address: 10.0.0.1 00:26:55.107 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:55.107 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.107 get_feature(0x01) failed 00:26:55.107 get_feature(0x02) failed 00:26:55.107 get_feature(0x04) failed 00:26:55.107 ===================================================== 00:26:55.107 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:55.107 ===================================================== 00:26:55.107 Controller Capabilities/Features 00:26:55.107 ================================ 00:26:55.107 Vendor ID: 0000 00:26:55.107 Subsystem Vendor ID: 0000 00:26:55.107 Serial Number: 79a5b08ab77dbbe57158 00:26:55.107 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:55.107 Firmware Version: 6.7.0-68 00:26:55.107 Recommended Arb Burst: 6 00:26:55.107 IEEE OUI Identifier: 00 00 00 00:26:55.107 Multi-path I/O 00:26:55.107 May have multiple subsystem ports: Yes 00:26:55.107 May have multiple controllers: Yes 00:26:55.107 Associated with SR-IOV VF: No 00:26:55.107 Max Data Transfer Size: Unlimited 00:26:55.107 Max Number of Namespaces: 1024 00:26:55.107 Max Number of I/O Queues: 128 00:26:55.107 NVMe Specification Version (VS): 1.3 00:26:55.107 NVMe Specification Version (Identify): 1.3 00:26:55.107 Maximum Queue Entries: 1024 00:26:55.107 Contiguous Queues Required: No 00:26:55.107 Arbitration Mechanisms Supported 
00:26:55.107 Weighted Round Robin: Not Supported 00:26:55.107 Vendor Specific: Not Supported 00:26:55.107 Reset Timeout: 7500 ms 00:26:55.107 Doorbell Stride: 4 bytes 00:26:55.107 NVM Subsystem Reset: Not Supported 00:26:55.107 Command Sets Supported 00:26:55.107 NVM Command Set: Supported 00:26:55.107 Boot Partition: Not Supported 00:26:55.107 Memory Page Size Minimum: 4096 bytes 00:26:55.107 Memory Page Size Maximum: 4096 bytes 00:26:55.107 Persistent Memory Region: Not Supported 00:26:55.107 Optional Asynchronous Events Supported 00:26:55.107 Namespace Attribute Notices: Supported 00:26:55.107 Firmware Activation Notices: Not Supported 00:26:55.107 ANA Change Notices: Supported 00:26:55.107 PLE Aggregate Log Change Notices: Not Supported 00:26:55.107 LBA Status Info Alert Notices: Not Supported 00:26:55.107 EGE Aggregate Log Change Notices: Not Supported 00:26:55.107 Normal NVM Subsystem Shutdown event: Not Supported 00:26:55.107 Zone Descriptor Change Notices: Not Supported 00:26:55.107 Discovery Log Change Notices: Not Supported 00:26:55.107 Controller Attributes 00:26:55.107 128-bit Host Identifier: Supported 00:26:55.107 Non-Operational Permissive Mode: Not Supported 00:26:55.107 NVM Sets: Not Supported 00:26:55.107 Read Recovery Levels: Not Supported 00:26:55.107 Endurance Groups: Not Supported 00:26:55.107 Predictable Latency Mode: Not Supported 00:26:55.107 Traffic Based Keep ALive: Supported 00:26:55.107 Namespace Granularity: Not Supported 00:26:55.107 SQ Associations: Not Supported 00:26:55.107 UUID List: Not Supported 00:26:55.107 Multi-Domain Subsystem: Not Supported 00:26:55.107 Fixed Capacity Management: Not Supported 00:26:55.107 Variable Capacity Management: Not Supported 00:26:55.107 Delete Endurance Group: Not Supported 00:26:55.107 Delete NVM Set: Not Supported 00:26:55.107 Extended LBA Formats Supported: Not Supported 00:26:55.107 Flexible Data Placement Supported: Not Supported 00:26:55.107 00:26:55.107 Controller Memory Buffer Support 
00:26:55.107 ================================ 00:26:55.107 Supported: No 00:26:55.107 00:26:55.107 Persistent Memory Region Support 00:26:55.107 ================================ 00:26:55.107 Supported: No 00:26:55.107 00:26:55.107 Admin Command Set Attributes 00:26:55.107 ============================ 00:26:55.107 Security Send/Receive: Not Supported 00:26:55.107 Format NVM: Not Supported 00:26:55.107 Firmware Activate/Download: Not Supported 00:26:55.107 Namespace Management: Not Supported 00:26:55.107 Device Self-Test: Not Supported 00:26:55.107 Directives: Not Supported 00:26:55.107 NVMe-MI: Not Supported 00:26:55.107 Virtualization Management: Not Supported 00:26:55.107 Doorbell Buffer Config: Not Supported 00:26:55.107 Get LBA Status Capability: Not Supported 00:26:55.107 Command & Feature Lockdown Capability: Not Supported 00:26:55.107 Abort Command Limit: 4 00:26:55.107 Async Event Request Limit: 4 00:26:55.107 Number of Firmware Slots: N/A 00:26:55.107 Firmware Slot 1 Read-Only: N/A 00:26:55.107 Firmware Activation Without Reset: N/A 00:26:55.107 Multiple Update Detection Support: N/A 00:26:55.107 Firmware Update Granularity: No Information Provided 00:26:55.107 Per-Namespace SMART Log: Yes 00:26:55.107 Asymmetric Namespace Access Log Page: Supported 00:26:55.107 ANA Transition Time : 10 sec 00:26:55.107 00:26:55.107 Asymmetric Namespace Access Capabilities 00:26:55.107 ANA Optimized State : Supported 00:26:55.107 ANA Non-Optimized State : Supported 00:26:55.107 ANA Inaccessible State : Supported 00:26:55.107 ANA Persistent Loss State : Supported 00:26:55.107 ANA Change State : Supported 00:26:55.107 ANAGRPID is not changed : No 00:26:55.107 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:55.107 00:26:55.107 ANA Group Identifier Maximum : 128 00:26:55.107 Number of ANA Group Identifiers : 128 00:26:55.107 Max Number of Allowed Namespaces : 1024 00:26:55.107 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:55.107 Command Effects Log Page: Supported 
00:26:55.107 Get Log Page Extended Data: Supported 00:26:55.107 Telemetry Log Pages: Not Supported 00:26:55.107 Persistent Event Log Pages: Not Supported 00:26:55.107 Supported Log Pages Log Page: May Support 00:26:55.107 Commands Supported & Effects Log Page: Not Supported 00:26:55.107 Feature Identifiers & Effects Log Page:May Support 00:26:55.107 NVMe-MI Commands & Effects Log Page: May Support 00:26:55.107 Data Area 4 for Telemetry Log: Not Supported 00:26:55.107 Error Log Page Entries Supported: 128 00:26:55.107 Keep Alive: Supported 00:26:55.107 Keep Alive Granularity: 1000 ms 00:26:55.107 00:26:55.107 NVM Command Set Attributes 00:26:55.107 ========================== 00:26:55.107 Submission Queue Entry Size 00:26:55.107 Max: 64 00:26:55.107 Min: 64 00:26:55.107 Completion Queue Entry Size 00:26:55.107 Max: 16 00:26:55.107 Min: 16 00:26:55.107 Number of Namespaces: 1024 00:26:55.108 Compare Command: Not Supported 00:26:55.108 Write Uncorrectable Command: Not Supported 00:26:55.108 Dataset Management Command: Supported 00:26:55.108 Write Zeroes Command: Supported 00:26:55.108 Set Features Save Field: Not Supported 00:26:55.108 Reservations: Not Supported 00:26:55.108 Timestamp: Not Supported 00:26:55.108 Copy: Not Supported 00:26:55.108 Volatile Write Cache: Present 00:26:55.108 Atomic Write Unit (Normal): 1 00:26:55.108 Atomic Write Unit (PFail): 1 00:26:55.108 Atomic Compare & Write Unit: 1 00:26:55.108 Fused Compare & Write: Not Supported 00:26:55.108 Scatter-Gather List 00:26:55.108 SGL Command Set: Supported 00:26:55.108 SGL Keyed: Not Supported 00:26:55.108 SGL Bit Bucket Descriptor: Not Supported 00:26:55.108 SGL Metadata Pointer: Not Supported 00:26:55.108 Oversized SGL: Not Supported 00:26:55.108 SGL Metadata Address: Not Supported 00:26:55.108 SGL Offset: Supported 00:26:55.108 Transport SGL Data Block: Not Supported 00:26:55.108 Replay Protected Memory Block: Not Supported 00:26:55.108 00:26:55.108 Firmware Slot Information 00:26:55.108 
========================= 00:26:55.108 Active slot: 0 00:26:55.108 00:26:55.108 Asymmetric Namespace Access 00:26:55.108 =========================== 00:26:55.108 Change Count : 0 00:26:55.108 Number of ANA Group Descriptors : 1 00:26:55.108 ANA Group Descriptor : 0 00:26:55.108 ANA Group ID : 1 00:26:55.108 Number of NSID Values : 1 00:26:55.108 Change Count : 0 00:26:55.108 ANA State : 1 00:26:55.108 Namespace Identifier : 1 00:26:55.108 00:26:55.108 Commands Supported and Effects 00:26:55.108 ============================== 00:26:55.108 Admin Commands 00:26:55.108 -------------- 00:26:55.108 Get Log Page (02h): Supported 00:26:55.108 Identify (06h): Supported 00:26:55.108 Abort (08h): Supported 00:26:55.108 Set Features (09h): Supported 00:26:55.108 Get Features (0Ah): Supported 00:26:55.108 Asynchronous Event Request (0Ch): Supported 00:26:55.108 Keep Alive (18h): Supported 00:26:55.108 I/O Commands 00:26:55.108 ------------ 00:26:55.108 Flush (00h): Supported 00:26:55.108 Write (01h): Supported LBA-Change 00:26:55.108 Read (02h): Supported 00:26:55.108 Write Zeroes (08h): Supported LBA-Change 00:26:55.108 Dataset Management (09h): Supported 00:26:55.108 00:26:55.108 Error Log 00:26:55.108 ========= 00:26:55.108 Entry: 0 00:26:55.108 Error Count: 0x3 00:26:55.108 Submission Queue Id: 0x0 00:26:55.108 Command Id: 0x5 00:26:55.108 Phase Bit: 0 00:26:55.108 Status Code: 0x2 00:26:55.108 Status Code Type: 0x0 00:26:55.108 Do Not Retry: 1 00:26:55.108 Error Location: 0x28 00:26:55.108 LBA: 0x0 00:26:55.108 Namespace: 0x0 00:26:55.108 Vendor Log Page: 0x0 00:26:55.108 ----------- 00:26:55.108 Entry: 1 00:26:55.108 Error Count: 0x2 00:26:55.108 Submission Queue Id: 0x0 00:26:55.108 Command Id: 0x5 00:26:55.108 Phase Bit: 0 00:26:55.108 Status Code: 0x2 00:26:55.108 Status Code Type: 0x0 00:26:55.108 Do Not Retry: 1 00:26:55.108 Error Location: 0x28 00:26:55.108 LBA: 0x0 00:26:55.108 Namespace: 0x0 00:26:55.108 Vendor Log Page: 0x0 00:26:55.108 ----------- 00:26:55.108 
Entry: 2 00:26:55.108 Error Count: 0x1 00:26:55.108 Submission Queue Id: 0x0 00:26:55.108 Command Id: 0x4 00:26:55.108 Phase Bit: 0 00:26:55.108 Status Code: 0x2 00:26:55.108 Status Code Type: 0x0 00:26:55.108 Do Not Retry: 1 00:26:55.108 Error Location: 0x28 00:26:55.108 LBA: 0x0 00:26:55.108 Namespace: 0x0 00:26:55.108 Vendor Log Page: 0x0 00:26:55.108 00:26:55.108 Number of Queues 00:26:55.108 ================ 00:26:55.108 Number of I/O Submission Queues: 128 00:26:55.108 Number of I/O Completion Queues: 128 00:26:55.108 00:26:55.108 ZNS Specific Controller Data 00:26:55.108 ============================ 00:26:55.108 Zone Append Size Limit: 0 00:26:55.108 00:26:55.108 00:26:55.108 Active Namespaces 00:26:55.108 ================= 00:26:55.108 get_feature(0x05) failed 00:26:55.108 Namespace ID:1 00:26:55.108 Command Set Identifier: NVM (00h) 00:26:55.108 Deallocate: Supported 00:26:55.108 Deallocated/Unwritten Error: Not Supported 00:26:55.108 Deallocated Read Value: Unknown 00:26:55.108 Deallocate in Write Zeroes: Not Supported 00:26:55.108 Deallocated Guard Field: 0xFFFF 00:26:55.108 Flush: Supported 00:26:55.108 Reservation: Not Supported 00:26:55.108 Namespace Sharing Capabilities: Multiple Controllers 00:26:55.108 Size (in LBAs): 3125627568 (1490GiB) 00:26:55.108 Capacity (in LBAs): 3125627568 (1490GiB) 00:26:55.108 Utilization (in LBAs): 3125627568 (1490GiB) 00:26:55.108 UUID: e854a4ea-57be-4cad-977c-85bb1639fda9 00:26:55.108 Thin Provisioning: Not Supported 00:26:55.108 Per-NS Atomic Units: Yes 00:26:55.108 Atomic Boundary Size (Normal): 0 00:26:55.108 Atomic Boundary Size (PFail): 0 00:26:55.108 Atomic Boundary Offset: 0 00:26:55.108 NGUID/EUI64 Never Reused: No 00:26:55.108 ANA group ID: 1 00:26:55.108 Namespace Write Protected: No 00:26:55.108 Number of LBA Formats: 1 00:26:55.108 Current LBA Format: LBA Format #00 00:26:55.108 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:55.108 00:26:55.108 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- 
host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:55.108 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:55.108 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:26:55.108 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:55.108 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:26:55.108 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:55.108 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:55.108 rmmod nvme_tcp 00:26:55.108 rmmod nvme_fabrics 00:26:55.108 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:55.108 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:26:55.108 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:26:55.108 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:26:55.108 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:55.108 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:55.108 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:55.108 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:55.108 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:55.108 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.108 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:55.108 12:15:44 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:57.640 12:15:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:57.640 12:15:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:57.640 12:15:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:57.640 12:15:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:26:57.640 12:15:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:57.640 12:15:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:57.640 12:15:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:57.640 12:15:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:57.640 12:15:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:57.640 12:15:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:57.640 12:15:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:00.169 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:00.169 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:00.169 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:00.169 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:00.169 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:00.169 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:00.169 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:00.169 0000:00:04.0 (8086 2021): ioatdma -> 
vfio-pci 00:27:00.169 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:00.169 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:00.169 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:00.169 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:00.169 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:00.428 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:00.428 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:00.428 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:01.803 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:27:01.803 00:27:01.803 real 0m18.289s 00:27:01.803 user 0m4.106s 00:27:01.803 sys 0m9.689s 00:27:01.803 12:15:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:01.803 12:15:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:01.803 ************************************ 00:27:01.803 END TEST nvmf_identify_kernel_target 00:27:01.803 ************************************ 00:27:01.803 12:15:51 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:01.803 12:15:51 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:27:01.803 12:15:51 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:01.803 12:15:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:02.063 ************************************ 00:27:02.063 START TEST nvmf_auth_host 00:27:02.063 ************************************ 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:02.063 * Looking for test storage... 
00:27:02.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:02.063 
12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:02.063 
12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:27:02.063 12:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.709 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:08.709 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:27:08.709 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:08.709 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:08.710 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:08.710 12:15:57 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:08.710 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices 
under 0000:af:00.0: cvl_0_0' 00:27:08.710 Found net devices under 0000:af:00.0: cvl_0_0 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:08.710 Found net devices under 0000:af:00.1: cvl_0_1 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:08.710 12:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:08.710 12:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:08.710 12:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:08.710 12:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:08.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:08.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:27:08.710 00:27:08.710 --- 10.0.0.2 ping statistics --- 00:27:08.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:08.710 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:27:08.710 12:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:08.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:08.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:27:08.710 00:27:08.710 --- 10.0.0.1 ping statistics --- 00:27:08.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:08.710 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:27:08.710 12:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:08.710 12:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:27:08.710 12:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:08.710 12:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:08.710 12:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:08.710 12:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:08.710 12:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:08.710 12:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:08.710 12:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:08.710 12:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:08.710 12:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:08.710 12:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:27:08.710 12:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.710 12:15:58 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2354437 00:27:08.710 12:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2354437 00:27:08.710 12:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:08.710 12:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 2354437 ']' 00:27:08.710 12:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:08.710 12:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:08.710 12:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:08.710 12:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:08.710 12:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.648 12:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:09.648 12:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:27:09.648 12:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:09.648 12:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:09.648 12:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:09.648 12:15:59 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c1b58902ca8bbc1ac385d88ccda10934 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.PBL 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c1b58902ca8bbc1ac385d88ccda10934 0 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c1b58902ca8bbc1ac385d88ccda10934 0 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c1b58902ca8bbc1ac385d88ccda10934 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.PBL 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.PBL 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.PBL 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 
64 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ecafed378a3cc73817a59cdacb13de481b04f4d25ac14767fd572490a39f3ecf 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.dg7 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ecafed378a3cc73817a59cdacb13de481b04f4d25ac14767fd572490a39f3ecf 3 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ecafed378a3cc73817a59cdacb13de481b04f4d25ac14767fd572490a39f3ecf 3 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ecafed378a3cc73817a59cdacb13de481b04f4d25ac14767fd572490a39f3ecf 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.dg7 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.dg7 00:27:09.648 12:15:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.dg7 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0afbef188c69596391ed2fc3fce3ac09cd63cc72d2273b9f 00:27:09.648 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.jwm 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0afbef188c69596391ed2fc3fce3ac09cd63cc72d2273b9f 0 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0afbef188c69596391ed2fc3fce3ac09cd63cc72d2273b9f 0 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0afbef188c69596391ed2fc3fce3ac09cd63cc72d2273b9f 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.jwm 00:27:09.908 12:15:59 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.jwm 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.jwm 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0c31c350f1fa343c6bf576a1343817e80d6940c36363deca 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.qN0 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0c31c350f1fa343c6bf576a1343817e80d6940c36363deca 2 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0c31c350f1fa343c6bf576a1343817e80d6940c36363deca 2 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0c31c350f1fa343c6bf576a1343817e80d6940c36363deca 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:09.908 12:15:59 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.qN0 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.qN0 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.qN0 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ee9f72d797bf117517cdf36282d40bb9 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.raO 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ee9f72d797bf117517cdf36282d40bb9 1 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ee9f72d797bf117517cdf36282d40bb9 1 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ee9f72d797bf117517cdf36282d40bb9 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.raO 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.raO 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.raO 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=80e5b69a9fefa4bbe783a840e0d7d4a5 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Qoe 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 80e5b69a9fefa4bbe783a840e0d7d4a5 1 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 80e5b69a9fefa4bbe783a840e0d7d4a5 1 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=80e5b69a9fefa4bbe783a840e0d7d4a5 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:09.908 
12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Qoe 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Qoe 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Qoe 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3e65c754ab734f38c2ce598fe325a7abea72ddee4c4c2dbb 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Tyq 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3e65c754ab734f38c2ce598fe325a7abea72ddee4c4c2dbb 2 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3e65c754ab734f38c2ce598fe325a7abea72ddee4c4c2dbb 2 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=3e65c754ab734f38c2ce598fe325a7abea72ddee4c4c2dbb 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:09.908 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:10.167 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Tyq 00:27:10.167 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Tyq 00:27:10.167 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Tyq 00:27:10.167 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:10.167 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:10.167 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:10.167 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:10.167 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:10.167 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:10.167 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:10.167 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d31d75141165d749826248aa3cb35bfd 00:27:10.167 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:10.167 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.GVb 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d31d75141165d749826248aa3cb35bfd 0 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d31d75141165d749826248aa3cb35bfd 0 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:10.168 12:15:59 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d31d75141165d749826248aa3cb35bfd 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.GVb 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.GVb 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.GVb 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7844a9c4df9ae0d57344fb14d0db51fc8eda8d58a1370f93ed0d29e5c8c2c7ec 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.1jS 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7844a9c4df9ae0d57344fb14d0db51fc8eda8d58a1370f93ed0d29e5c8c2c7ec 3 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7844a9c4df9ae0d57344fb14d0db51fc8eda8d58a1370f93ed0d29e5c8c2c7ec 3 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local 
prefix key digest 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7844a9c4df9ae0d57344fb14d0db51fc8eda8d58a1370f93ed0d29e5c8c2c7ec 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.1jS 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.1jS 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.1jS 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2354437 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 2354437 ']' 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:10.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
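The `gen_dhchap_key` traces above read random bytes with `xxd -p /dev/urandom`, then hand the hex string to a small inline Python snippet (`nvmf/common.sh@705 -- # python -`, body not shown in the log) that produces the `DHHC-1:<digest>:<base64>:` secret. A minimal sketch of that formatting step, under the assumption that the secret body is the ASCII hex string with a little-endian CRC-32 appended before base64 encoding (the function name and exact convention are reconstructions, consistent with the DHHC-1 values visible later in this log):

```python
import base64
import zlib

def format_dhchap_key(key_hex: str, digest_id: int, prefix: str = "DHHC-1") -> str:
    """Sketch of the DHHC-1 secret formatting done by the inline python snippet.

    Assumption: the payload is the ASCII hex string itself plus a
    little-endian CRC-32 of those bytes, base64-encoded, with a
    trailing ':' -- matching the 'DHHC-1:00:...==:' strings in the trace.
    """
    data = key_hex.encode("ascii")
    crc = zlib.crc32(data).to_bytes(4, "little")
    b64 = base64.b64encode(data + crc).decode("ascii")
    return f"{prefix}:{digest_id:02x}:{b64}:"

# Hex key taken from the keys[1] generation above (digest 0 = null)
secret = format_dhchap_key("0afbef188c69596391ed2fc3fce3ac09cd63cc72d2273b9f", 0)
```

The base64 prefix of the result (`DHHC-1:00:MGFmYmVm...`) matches the `keys[1]` secret that `host/auth.sh@45` later passes to `keyring_file_add_key`; whether the four CRC tail bytes also match depends on the CRC assumption holding.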
00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:10.168 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.PBL 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.dg7 ]] 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dg7 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.jwm 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n 
/tmp/spdk.key-sha384.qN0 ]] 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qN0 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.raO 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Qoe ]] 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Qoe 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Tyq 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:10.428 
12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.GVb ]] 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.GVb 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.1jS 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:10.428 12:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:13.717 Waiting for block devices as requested 00:27:13.717 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:13.717 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:13.717 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:13.717 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:13.977 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:13.977 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:13.977 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:13.977 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:14.236 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:14.236 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:14.236 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:14.494 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:14.494 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:14.494 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:14.753 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:14.753 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:14.753 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:27:15.690 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:15.690 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:15.690 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:15.690 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:27:15.690 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:15.690 12:16:05 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@1664 -- # [[ none != none ]] 00:27:15.690 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:15.690 12:16:05 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:15.690 12:16:05 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:15.690 No valid GPT data, bailing 00:27:15.690 12:16:05 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:15.690 12:16:05 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:27:15.690 12:16:05 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:27:15.690 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:15.690 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:15.690 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:15.690 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:15.690 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:15.690 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:15.690 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- 
# echo ipv4 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:27:15.691 00:27:15.691 Discovery Log Number of Records 2, Generation counter 2 00:27:15.691 =====Discovery Log Entry 0====== 00:27:15.691 trtype: tcp 00:27:15.691 adrfam: ipv4 00:27:15.691 subtype: current discovery subsystem 00:27:15.691 treq: not specified, sq flow control disable supported 00:27:15.691 portid: 1 00:27:15.691 trsvcid: 4420 00:27:15.691 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:15.691 traddr: 10.0.0.1 00:27:15.691 eflags: none 00:27:15.691 sectype: none 00:27:15.691 =====Discovery Log Entry 1====== 00:27:15.691 trtype: tcp 00:27:15.691 adrfam: ipv4 00:27:15.691 subtype: nvme subsystem 00:27:15.691 treq: not specified, sq flow control disable supported 00:27:15.691 portid: 1 00:27:15.691 trsvcid: 4420 00:27:15.691 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:15.691 traddr: 10.0.0.1 00:27:15.691 eflags: none 00:27:15.691 sectype: none 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.691 12:16:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFmYmVmMTg4YzY5NTk2MzkxZWQyZmMzZmNlM2FjMDljZDYzY2M3MmQyMjczYjlm1FpGdA==: 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFmYmVmMTg4YzY5NTk2MzkxZWQyZmMzZmNlM2FjMDljZDYzY2M3MmQyMjczYjlm1FpGdA==: 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: ]] 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:15.691 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.950 nvme0n1 00:27:15.950 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:15.950 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.950 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.950 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:15.950 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.950 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:15.950 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFiNTg5MDJjYThiYmMxYWMzODVkODhjY2RhMTA5MzTZ/qdw: 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFiNTg5MDJjYThiYmMxYWMzODVkODhjY2RhMTA5MzTZ/qdw: 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: ]] 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:15.951 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.210 nvme0n1 00:27:16.210 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.210 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.210 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.210 12:16:05 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.210 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.210 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.210 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.210 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.210 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.210 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.210 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.210 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.210 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:16.210 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.210 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:16.210 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:16.210 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:16.210 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFmYmVmMTg4YzY5NTk2MzkxZWQyZmMzZmNlM2FjMDljZDYzY2M3MmQyMjczYjlm1FpGdA==: 00:27:16.210 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: 00:27:16.210 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:16.210 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:16.211 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFmYmVmMTg4YzY5NTk2MzkxZWQyZmMzZmNlM2FjMDljZDYzY2M3MmQyMjczYjlm1FpGdA==: 00:27:16.211 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 
-- # [[ -z DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: ]] 00:27:16.211 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: 00:27:16.211 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:16.211 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.211 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:16.211 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:16.211 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:16.211 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.211 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:16.211 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.211 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.211 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.211 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.211 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.211 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.211 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.211 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.211 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.211 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.211 12:16:05 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.211 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.211 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.211 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.211 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:16.211 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.211 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.471 nvme0n1 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWU5ZjcyZDc5N2JmMTE3NTE3Y2RmMzYyODJkNDBiYjnvumZa: 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWU5ZjcyZDc5N2JmMTE3NTE3Y2RmMzYyODJkNDBiYjnvumZa: 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: ]] 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.471 12:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.731 nvme0n1 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2U2NWM3NTRhYjczNGYzOGMyY2U1OThmZTMyNWE3YWJlYTcyZGRlZTRjNGMyZGJir3VtsA==: 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:M2U2NWM3NTRhYjczNGYzOGMyY2U1OThmZTMyNWE3YWJlYTcyZGRlZTRjNGMyZGJir3VtsA==: 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: ]] 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.731 12:16:06 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.731 nvme0n1 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.731 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.991 12:16:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg0NGE5YzRkZjlhZTBkNTczNDRmYjE0ZDBkYjUxZmM4ZWRhOGQ1OGExMzcwZjkzZWQwZDI5ZTVjOGMyYzdlY9pvfig=: 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg0NGE5YzRkZjlhZTBkNTczNDRmYjE0ZDBkYjUxZmM4ZWRhOGQ1OGExMzcwZjkzZWQwZDI5ZTVjOGMyYzdlY9pvfig=: 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:16.991 12:16:06 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.991 nvme0n1 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFiNTg5MDJjYThiYmMxYWMzODVkODhjY2RhMTA5MzTZ/qdw: 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:16.991 12:16:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFiNTg5MDJjYThiYmMxYWMzODVkODhjY2RhMTA5MzTZ/qdw: 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: ]] 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.991 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.251 12:16:06 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.251 nvme0n1 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.251 
12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFmYmVmMTg4YzY5NTk2MzkxZWQyZmMzZmNlM2FjMDljZDYzY2M3MmQyMjczYjlm1FpGdA==: 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFmYmVmMTg4YzY5NTk2MzkxZWQyZmMzZmNlM2FjMDljZDYzY2M3MmQyMjczYjlm1FpGdA==: 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: ]] 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.251 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.252 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.252 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.252 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.252 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.252 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.252 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.252 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.252 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.252 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.252 12:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.252 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:17.252 12:16:06 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.252 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.510 nvme0n1 00:27:17.510 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.510 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.510 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.510 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.510 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.510 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.510 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.510 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.510 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.510 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.510 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.510 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.510 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:17.510 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.510 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:17.510 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:17.510 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:17.510 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWU5ZjcyZDc5N2JmMTE3NTE3Y2RmMzYyODJkNDBiYjnvumZa: 00:27:17.510 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: 00:27:17.510 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.511 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:17.511 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWU5ZjcyZDc5N2JmMTE3NTE3Y2RmMzYyODJkNDBiYjnvumZa: 00:27:17.511 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: ]] 00:27:17.511 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: 00:27:17.511 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:17.511 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.511 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:17.511 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:17.511 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:17.511 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.511 12:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:17.511 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.511 12:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.511 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.511 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.511 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.511 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.511 12:16:07 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.511 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.511 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.511 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.511 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.511 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.511 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.511 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.511 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:17.511 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.511 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.769 nvme0n1 00:27:17.769 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.769 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.769 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.769 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.769 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.769 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.769 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.769 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.769 12:16:07 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.769 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.769 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.769 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.769 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:17.769 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.769 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:17.769 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:17.769 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:17.769 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2U2NWM3NTRhYjczNGYzOGMyY2U1OThmZTMyNWE3YWJlYTcyZGRlZTRjNGMyZGJir3VtsA==: 00:27:17.769 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: 00:27:17.769 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.769 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:17.769 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2U2NWM3NTRhYjczNGYzOGMyY2U1OThmZTMyNWE3YWJlYTcyZGRlZTRjNGMyZGJir3VtsA==: 00:27:17.769 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: ]] 00:27:17.769 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: 00:27:17.769 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:17.769 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.769 12:16:07 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha256 00:27:17.769 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:17.770 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:17.770 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.770 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:17.770 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.770 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.770 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.770 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.770 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.770 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.770 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.770 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.770 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.770 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.770 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.770 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.770 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.770 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.770 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:17.770 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.770 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.029 nvme0n1 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Nzg0NGE5YzRkZjlhZTBkNTczNDRmYjE0ZDBkYjUxZmM4ZWRhOGQ1OGExMzcwZjkzZWQwZDI5ZTVjOGMyYzdlY9pvfig=: 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg0NGE5YzRkZjlhZTBkNTczNDRmYjE0ZDBkYjUxZmM4ZWRhOGQ1OGExMzcwZjkzZWQwZDI5ZTVjOGMyYzdlY9pvfig=: 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A 
ip_candidates 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.029 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.289 nvme0n1 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 
-- # xtrace_disable 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFiNTg5MDJjYThiYmMxYWMzODVkODhjY2RhMTA5MzTZ/qdw: 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFiNTg5MDJjYThiYmMxYWMzODVkODhjY2RhMTA5MzTZ/qdw: 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: ]] 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:18.289 12:16:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.289 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.548 nvme0n1 00:27:18.548 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.548 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.548 12:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.548 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.548 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.548 12:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=1 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFmYmVmMTg4YzY5NTk2MzkxZWQyZmMzZmNlM2FjMDljZDYzY2M3MmQyMjczYjlm1FpGdA==: 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFmYmVmMTg4YzY5NTk2MzkxZWQyZmMzZmNlM2FjMDljZDYzY2M3MmQyMjczYjlm1FpGdA==: 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: ]] 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.548 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:18.549 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.549 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.807 nvme0n1 00:27:18.807 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.807 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.807 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.807 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.807 12:16:08 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.807 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.066 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.066 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.066 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.066 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.066 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.066 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.066 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:19.066 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.066 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.066 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:19.066 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:19.066 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWU5ZjcyZDc5N2JmMTE3NTE3Y2RmMzYyODJkNDBiYjnvumZa: 00:27:19.066 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: 00:27:19.066 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.066 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:19.066 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWU5ZjcyZDc5N2JmMTE3NTE3Y2RmMzYyODJkNDBiYjnvumZa: 00:27:19.067 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: ]] 00:27:19.067 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: 00:27:19.067 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:19.067 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.067 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.067 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:19.067 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:19.067 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.067 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:19.067 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.067 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.067 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.067 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.067 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.067 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.067 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.067 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.067 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.067 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.067 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.067 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.067 12:16:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.067 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.067 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:19.067 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.067 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.326 nvme0n1 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.326 12:16:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2U2NWM3NTRhYjczNGYzOGMyY2U1OThmZTMyNWE3YWJlYTcyZGRlZTRjNGMyZGJir3VtsA==: 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2U2NWM3NTRhYjczNGYzOGMyY2U1OThmZTMyNWE3YWJlYTcyZGRlZTRjNGMyZGJir3VtsA==: 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: ]] 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.326 12:16:08 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.326 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.586 nvme0n1 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.586 12:16:08 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg0NGE5YzRkZjlhZTBkNTczNDRmYjE0ZDBkYjUxZmM4ZWRhOGQ1OGExMzcwZjkzZWQwZDI5ZTVjOGMyYzdlY9pvfig=: 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg0NGE5YzRkZjlhZTBkNTczNDRmYjE0ZDBkYjUxZmM4ZWRhOGQ1OGExMzcwZjkzZWQwZDI5ZTVjOGMyYzdlY9pvfig=: 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 
00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.586 
12:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.586 12:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.846 nvme0n1 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFiNTg5MDJjYThiYmMxYWMzODVkODhjY2RhMTA5MzTZ/qdw: 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFiNTg5MDJjYThiYmMxYWMzODVkODhjY2RhMTA5MzTZ/qdw: 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: ]] 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:19.846 12:16:09 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.846 12:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.415 nvme0n1 00:27:20.415 12:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:20.415 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.415 12:16:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.415 12:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:20.415 12:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.415 12:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFmYmVmMTg4YzY5NTk2MzkxZWQyZmMzZmNlM2FjMDljZDYzY2M3MmQyMjczYjlm1FpGdA==: 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGFmYmVmMTg4YzY5NTk2MzkxZWQyZmMzZmNlM2FjMDljZDYzY2M3MmQyMjczYjlm1FpGdA==: 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: ]] 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:20.416 12:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.675 nvme0n1 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWU5ZjcyZDc5N2JmMTE3NTE3Y2RmMzYyODJkNDBiYjnvumZa: 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWU5ZjcyZDc5N2JmMTE3NTE3Y2RmMzYyODJkNDBiYjnvumZa: 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: ]] 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:20.675 12:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.244 nvme0n1 
00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2U2NWM3NTRhYjczNGYzOGMyY2U1OThmZTMyNWE3YWJlYTcyZGRlZTRjNGMyZGJir3VtsA==: 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2U2NWM3NTRhYjczNGYzOGMyY2U1OThmZTMyNWE3YWJlYTcyZGRlZTRjNGMyZGJir3VtsA==: 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: ]] 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.244 12:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.503 nvme0n1 00:27:21.503 12:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.503 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.503 12:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.503 12:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.503 12:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.503 12:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.503 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.503 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.503 12:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.503 12:16:11 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg0NGE5YzRkZjlhZTBkNTczNDRmYjE0ZDBkYjUxZmM4ZWRhOGQ1OGExMzcwZjkzZWQwZDI5ZTVjOGMyYzdlY9pvfig=: 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg0NGE5YzRkZjlhZTBkNTczNDRmYjE0ZDBkYjUxZmM4ZWRhOGQ1OGExMzcwZjkzZWQwZDI5ZTVjOGMyYzdlY9pvfig=: 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.763 12:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.023 nvme0n1 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFiNTg5MDJjYThiYmMxYWMzODVkODhjY2RhMTA5MzTZ/qdw: 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: 
00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFiNTg5MDJjYThiYmMxYWMzODVkODhjY2RhMTA5MzTZ/qdw: 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: ]] 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.023 12:16:11 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.023 12:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.591 nvme0n1 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFmYmVmMTg4YzY5NTk2MzkxZWQyZmMzZmNlM2FjMDljZDYzY2M3MmQyMjczYjlm1FpGdA==: 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFmYmVmMTg4YzY5NTk2MzkxZWQyZmMzZmNlM2FjMDljZDYzY2M3MmQyMjczYjlm1FpGdA==: 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: ]] 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.591 12:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.159 nvme0n1 00:27:23.159 12:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.159 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.159 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.159 12:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.159 12:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.159 12:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.159 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.159 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.159 12:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.159 12:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.418 12:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.418 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.418 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:23.418 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.418 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.418 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:23.418 12:16:12 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:27:23.418 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWU5ZjcyZDc5N2JmMTE3NTE3Y2RmMzYyODJkNDBiYjnvumZa: 00:27:23.418 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: 00:27:23.418 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.418 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:23.418 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWU5ZjcyZDc5N2JmMTE3NTE3Y2RmMzYyODJkNDBiYjnvumZa: 00:27:23.418 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: ]] 00:27:23.418 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: 00:27:23.418 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:23.418 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.418 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.418 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:23.418 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:23.418 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.419 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:23.419 12:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.419 12:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.419 12:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.419 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:27:23.419 12:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.419 12:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.419 12:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.419 12:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.419 12:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.419 12:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.419 12:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.419 12:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.419 12:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.419 12:16:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.419 12:16:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:23.419 12:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.419 12:16:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.987 nvme0n1 00:27:23.987 12:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.987 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.987 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.987 12:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.987 12:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.987 12:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:27:23.987 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.987 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.987 12:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.987 12:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.987 12:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.987 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.987 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:23.987 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.987 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.987 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:23.987 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:23.987 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2U2NWM3NTRhYjczNGYzOGMyY2U1OThmZTMyNWE3YWJlYTcyZGRlZTRjNGMyZGJir3VtsA==: 00:27:23.988 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: 00:27:23.988 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.988 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:23.988 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2U2NWM3NTRhYjczNGYzOGMyY2U1OThmZTMyNWE3YWJlYTcyZGRlZTRjNGMyZGJir3VtsA==: 00:27:23.988 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: ]] 00:27:23.988 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: 00:27:23.988 12:16:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:23.988 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.988 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.988 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:23.988 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:23.988 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.988 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:23.988 12:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.988 12:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.988 12:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.988 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.988 12:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.988 12:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.988 12:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.988 12:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.988 12:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.988 12:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.988 12:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.988 12:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.988 12:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.988 12:16:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.988 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:23.988 12:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.988 12:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.556 nvme0n1 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg0NGE5YzRkZjlhZTBkNTczNDRmYjE0ZDBkYjUxZmM4ZWRhOGQ1OGExMzcwZjkzZWQwZDI5ZTVjOGMyYzdlY9pvfig=: 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg0NGE5YzRkZjlhZTBkNTczNDRmYjE0ZDBkYjUxZmM4ZWRhOGQ1OGExMzcwZjkzZWQwZDI5ZTVjOGMyYzdlY9pvfig=: 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.556 12:16:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:24.556 12:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.557 12:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:24.557 12:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.557 12:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.125 nvme0n1 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFiNTg5MDJjYThiYmMxYWMzODVkODhjY2RhMTA5MzTZ/qdw: 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFiNTg5MDJjYThiYmMxYWMzODVkODhjY2RhMTA5MzTZ/qdw: 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: ]] 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.125 12:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.126 12:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.126 12:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.126 12:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.126 
12:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.126 12:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.126 12:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.126 12:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.126 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:25.126 12:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.126 12:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.385 nvme0n1 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.385 
12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFmYmVmMTg4YzY5NTk2MzkxZWQyZmMzZmNlM2FjMDljZDYzY2M3MmQyMjczYjlm1FpGdA==: 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFmYmVmMTg4YzY5NTk2MzkxZWQyZmMzZmNlM2FjMDljZDYzY2M3MmQyMjczYjlm1FpGdA==: 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: ]] 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.385 12:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.644 nvme0n1 00:27:25.644 12:16:14 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.644 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.644 12:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.644 12:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.644 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.644 12:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.644 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.644 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.644 12:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.644 12:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.644 12:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.644 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.644 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:25.644 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.644 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:25.644 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:25.644 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:25.644 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWU5ZjcyZDc5N2JmMTE3NTE3Y2RmMzYyODJkNDBiYjnvumZa: 00:27:25.644 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: 00:27:25.644 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:25.644 12:16:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:25.644 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWU5ZjcyZDc5N2JmMTE3NTE3Y2RmMzYyODJkNDBiYjnvumZa: 00:27:25.644 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: ]] 00:27:25.644 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: 00:27:25.644 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:25.644 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.644 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:25.644 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:25.644 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:25.644 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.644 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:25.644 12:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.644 12:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.644 12:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.644 12:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.644 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.644 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.644 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.644 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.644 12:16:15 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.644 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.644 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.644 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.644 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.644 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.644 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:25.644 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.644 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.644 nvme0n1 00:27:25.644 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.644 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.644 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.644 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.644 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.904 
12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2U2NWM3NTRhYjczNGYzOGMyY2U1OThmZTMyNWE3YWJlYTcyZGRlZTRjNGMyZGJir3VtsA==: 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2U2NWM3NTRhYjczNGYzOGMyY2U1OThmZTMyNWE3YWJlYTcyZGRlZTRjNGMyZGJir3VtsA==: 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: ]] 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # keyid=3 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.904 12:16:15 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.904 nvme0n1 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.904 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg0NGE5YzRkZjlhZTBkNTczNDRmYjE0ZDBkYjUxZmM4ZWRhOGQ1OGExMzcwZjkzZWQwZDI5ZTVjOGMyYzdlY9pvfig=: 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:26.164 12:16:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg0NGE5YzRkZjlhZTBkNTczNDRmYjE0ZDBkYjUxZmM4ZWRhOGQ1OGExMzcwZjkzZWQwZDI5ZTVjOGMyYzdlY9pvfig=: 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.164 nvme0n1 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFiNTg5MDJjYThiYmMxYWMzODVkODhjY2RhMTA5MzTZ/qdw: 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFiNTg5MDJjYThiYmMxYWMzODVkODhjY2RhMTA5MzTZ/qdw: 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: ]] 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:26.164 12:16:15 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.164 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.424 nvme0n1 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFmYmVmMTg4YzY5NTk2MzkxZWQyZmMzZmNlM2FjMDljZDYzY2M3MmQyMjczYjlm1FpGdA==: 00:27:26.424 12:16:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFmYmVmMTg4YzY5NTk2MzkxZWQyZmMzZmNlM2FjMDljZDYzY2M3MmQyMjczYjlm1FpGdA==: 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: ]] 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 
00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.424 12:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.683 nvme0n1 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWU5ZjcyZDc5N2JmMTE3NTE3Y2RmMzYyODJkNDBiYjnvumZa: 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWU5ZjcyZDc5N2JmMTE3NTE3Y2RmMzYyODJkNDBiYjnvumZa: 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: ]] 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.684 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.943 nvme0n1 00:27:26.943 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.943 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.943 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.943 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.943 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.943 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.943 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.943 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.943 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.943 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.943 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.943 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.943 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:26.943 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.943 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:26.943 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:26.943 12:16:16 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:27:26.943 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2U2NWM3NTRhYjczNGYzOGMyY2U1OThmZTMyNWE3YWJlYTcyZGRlZTRjNGMyZGJir3VtsA==: 00:27:26.943 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: 00:27:26.943 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:26.943 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:26.943 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2U2NWM3NTRhYjczNGYzOGMyY2U1OThmZTMyNWE3YWJlYTcyZGRlZTRjNGMyZGJir3VtsA==: 00:27:26.943 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: ]] 00:27:26.943 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: 00:27:26.943 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:26.943 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.943 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:26.943 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:26.944 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:26.944 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.944 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:26.944 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.944 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.944 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.944 12:16:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.944 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.944 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.944 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.944 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.944 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.944 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.944 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.944 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.944 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.944 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.944 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:26.944 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.944 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.203 nvme0n1 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.203 12:16:16 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg0NGE5YzRkZjlhZTBkNTczNDRmYjE0ZDBkYjUxZmM4ZWRhOGQ1OGExMzcwZjkzZWQwZDI5ZTVjOGMyYzdlY9pvfig=: 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg0NGE5YzRkZjlhZTBkNTczNDRmYjE0ZDBkYjUxZmM4ZWRhOGQ1OGExMzcwZjkzZWQwZDI5ZTVjOGMyYzdlY9pvfig=: 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 
-- # local digest dhgroup keyid ckey 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:27.203 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.463 nvme0n1 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:27.463 
12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFiNTg5MDJjYThiYmMxYWMzODVkODhjY2RhMTA5MzTZ/qdw: 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFiNTg5MDJjYThiYmMxYWMzODVkODhjY2RhMTA5MzTZ/qdw: 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: ]] 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.463 
12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:27.463 12:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.723 nvme0n1 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:27.723 12:16:17 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFmYmVmMTg4YzY5NTk2MzkxZWQyZmMzZmNlM2FjMDljZDYzY2M3MmQyMjczYjlm1FpGdA==: 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFmYmVmMTg4YzY5NTk2MzkxZWQyZmMzZmNlM2FjMDljZDYzY2M3MmQyMjczYjlm1FpGdA==: 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: ]] 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:27.723 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.986 nvme0n1 00:27:27.986 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:27.986 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.986 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.986 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:27.986 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.986 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:27.986 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.986 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.986 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:27.986 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.986 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:27.986 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.986 12:16:17 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:27.986 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.986 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:27.986 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:27.986 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:27.986 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWU5ZjcyZDc5N2JmMTE3NTE3Y2RmMzYyODJkNDBiYjnvumZa: 00:27:27.986 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: 00:27:27.986 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:27.986 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:27.986 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWU5ZjcyZDc5N2JmMTE3NTE3Y2RmMzYyODJkNDBiYjnvumZa: 00:27:27.986 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: ]] 00:27:27.986 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: 00:27:27.986 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:27.986 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.986 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:27.986 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:27.986 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:27.987 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.987 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe4096 00:27:27.987 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:27.987 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.987 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:27.987 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.987 12:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.987 12:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.987 12:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.987 12:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.987 12:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.987 12:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.987 12:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.987 12:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.987 12:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.987 12:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.987 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:27.987 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:27.987 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.344 nvme0n1 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2U2NWM3NTRhYjczNGYzOGMyY2U1OThmZTMyNWE3YWJlYTcyZGRlZTRjNGMyZGJir3VtsA==: 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:M2U2NWM3NTRhYjczNGYzOGMyY2U1OThmZTMyNWE3YWJlYTcyZGRlZTRjNGMyZGJir3VtsA==: 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: ]] 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.344 12:16:17 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.344 12:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.603 nvme0n1 00:27:28.603 12:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.603 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.603 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.603 12:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.603 12:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.603 12:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.603 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.603 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.603 12:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.603 12:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.603 12:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.603 12:16:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.603 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:28.603 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.603 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:28.603 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:28.603 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:28.603 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg0NGE5YzRkZjlhZTBkNTczNDRmYjE0ZDBkYjUxZmM4ZWRhOGQ1OGExMzcwZjkzZWQwZDI5ZTVjOGMyYzdlY9pvfig=: 00:27:28.603 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:28.604 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:28.604 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:28.604 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg0NGE5YzRkZjlhZTBkNTczNDRmYjE0ZDBkYjUxZmM4ZWRhOGQ1OGExMzcwZjkzZWQwZDI5ZTVjOGMyYzdlY9pvfig=: 00:27:28.604 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:28.604 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:28.604 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.604 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:28.604 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:28.604 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:28.604 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.604 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:28.604 12:16:18 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.604 12:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.604 12:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.604 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.604 12:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.604 12:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.604 12:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.604 12:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.604 12:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.604 12:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.604 12:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.604 12:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.604 12:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.604 12:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.604 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:28.604 12:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.604 12:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.862 nvme0n1 00:27:28.862 12:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.862 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.862 12:16:18 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.862 12:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.862 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.862 12:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.862 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.862 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.862 12:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.862 12:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.862 12:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.862 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:28.862 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.862 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:28.862 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.862 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:28.862 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:28.862 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:28.862 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFiNTg5MDJjYThiYmMxYWMzODVkODhjY2RhMTA5MzTZ/qdw: 00:27:28.862 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: 00:27:28.862 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:28.862 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:28.862 12:16:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFiNTg5MDJjYThiYmMxYWMzODVkODhjY2RhMTA5MzTZ/qdw: 00:27:28.862 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: ]] 00:27:28.862 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: 00:27:28.862 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:28.862 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.862 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:28.862 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:28.862 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:28.862 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.862 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:28.862 12:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.863 12:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.121 12:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:29.121 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.121 12:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.121 12:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.121 12:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.121 12:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.121 12:16:18 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.121 12:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.121 12:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.121 12:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.121 12:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.121 12:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.121 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:29.121 12:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:29.121 12:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.380 nvme0n1 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.380 
12:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFmYmVmMTg4YzY5NTk2MzkxZWQyZmMzZmNlM2FjMDljZDYzY2M3MmQyMjczYjlm1FpGdA==: 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFmYmVmMTg4YzY5NTk2MzkxZWQyZmMzZmNlM2FjMDljZDYzY2M3MmQyMjczYjlm1FpGdA==: 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: ]] 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe6144 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@560 -- # xtrace_disable 00:27:29.380 12:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.945 nvme0n1 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWU5ZjcyZDc5N2JmMTE3NTE3Y2RmMzYyODJkNDBiYjnvumZa: 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWU5ZjcyZDc5N2JmMTE3NTE3Y2RmMzYyODJkNDBiYjnvumZa: 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: ]] 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:29.945 12:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.203 nvme0n1 00:27:30.203 12:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.203 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.203 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.203 12:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.203 12:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.203 12:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.203 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.203 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.203 12:16:19 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.203 12:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.203 12:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.203 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.203 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:30.203 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.203 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.203 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:30.203 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:30.203 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2U2NWM3NTRhYjczNGYzOGMyY2U1OThmZTMyNWE3YWJlYTcyZGRlZTRjNGMyZGJir3VtsA==: 00:27:30.203 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: 00:27:30.203 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.203 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:30.203 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2U2NWM3NTRhYjczNGYzOGMyY2U1OThmZTMyNWE3YWJlYTcyZGRlZTRjNGMyZGJir3VtsA==: 00:27:30.203 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: ]] 00:27:30.203 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: 00:27:30.203 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:30.203 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.204 12:16:19 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha384 00:27:30.204 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:30.204 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:30.204 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.204 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:30.204 12:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.204 12:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.204 12:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.204 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.204 12:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.204 12:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.204 12:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.204 12:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.204 12:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.204 12:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.204 12:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.204 12:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.204 12:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.204 12:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.204 12:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:30.204 12:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.204 12:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.772 nvme0n1 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Nzg0NGE5YzRkZjlhZTBkNTczNDRmYjE0ZDBkYjUxZmM4ZWRhOGQ1OGExMzcwZjkzZWQwZDI5ZTVjOGMyYzdlY9pvfig=: 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg0NGE5YzRkZjlhZTBkNTczNDRmYjE0ZDBkYjUxZmM4ZWRhOGQ1OGExMzcwZjkzZWQwZDI5ZTVjOGMyYzdlY9pvfig=: 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A 
ip_candidates 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.772 12:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.031 nvme0n1 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 
-- # xtrace_disable 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFiNTg5MDJjYThiYmMxYWMzODVkODhjY2RhMTA5MzTZ/qdw: 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFiNTg5MDJjYThiYmMxYWMzODVkODhjY2RhMTA5MzTZ/qdw: 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: ]] 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:31.031 12:16:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.031 12:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.290 12:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:31.290 12:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.290 12:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.857 nvme0n1 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=1 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFmYmVmMTg4YzY5NTk2MzkxZWQyZmMzZmNlM2FjMDljZDYzY2M3MmQyMjczYjlm1FpGdA==: 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFmYmVmMTg4YzY5NTk2MzkxZWQyZmMzZmNlM2FjMDljZDYzY2M3MmQyMjczYjlm1FpGdA==: 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: ]] 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.857 12:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.424 nvme0n1 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWU5ZjcyZDc5N2JmMTE3NTE3Y2RmMzYyODJkNDBiYjnvumZa: 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWU5ZjcyZDc5N2JmMTE3NTE3Y2RmMzYyODJkNDBiYjnvumZa: 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: ]] 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.424 12:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.992 nvme0n1 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.992 12:16:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2U2NWM3NTRhYjczNGYzOGMyY2U1OThmZTMyNWE3YWJlYTcyZGRlZTRjNGMyZGJir3VtsA==: 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2U2NWM3NTRhYjczNGYzOGMyY2U1OThmZTMyNWE3YWJlYTcyZGRlZTRjNGMyZGJir3VtsA==: 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: ]] 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.992 12:16:22 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.992 12:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.559 nvme0n1 00:27:33.559 12:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:33.559 12:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.559 12:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:33.559 12:16:22 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.559 12:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.559 12:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:33.559 12:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.559 12:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.559 12:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:33.559 12:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.559 12:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:33.559 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.559 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:33.559 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.559 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:33.559 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:33.559 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:33.559 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg0NGE5YzRkZjlhZTBkNTczNDRmYjE0ZDBkYjUxZmM4ZWRhOGQ1OGExMzcwZjkzZWQwZDI5ZTVjOGMyYzdlY9pvfig=: 00:27:33.559 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:33.559 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.559 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:33.559 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg0NGE5YzRkZjlhZTBkNTczNDRmYjE0ZDBkYjUxZmM4ZWRhOGQ1OGExMzcwZjkzZWQwZDI5ZTVjOGMyYzdlY9pvfig=: 00:27:33.559 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 
00:27:33.559 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:33.560 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.560 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:33.560 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:33.560 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:33.560 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.560 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:33.560 12:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:33.560 12:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.560 12:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:33.560 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.560 12:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.560 12:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.560 12:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.560 12:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.560 12:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.560 12:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.560 12:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.560 12:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.560 12:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.560 
12:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.560 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:33.560 12:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:33.560 12:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.128 nvme0n1 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 
00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFiNTg5MDJjYThiYmMxYWMzODVkODhjY2RhMTA5MzTZ/qdw: 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFiNTg5MDJjYThiYmMxYWMzODVkODhjY2RhMTA5MzTZ/qdw: 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: ]] 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- 
# rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.128 12:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.387 nvme0n1 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.387 12:16:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFmYmVmMTg4YzY5NTk2MzkxZWQyZmMzZmNlM2FjMDljZDYzY2M3MmQyMjczYjlm1FpGdA==: 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo 
ffdhe2048 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFmYmVmMTg4YzY5NTk2MzkxZWQyZmMzZmNlM2FjMDljZDYzY2M3MmQyMjczYjlm1FpGdA==: 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: ]] 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.387 12:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.388 12:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.388 12:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:34.388 12:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.388 12:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.646 nvme0n1 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWU5ZjcyZDc5N2JmMTE3NTE3Y2RmMzYyODJkNDBiYjnvumZa: 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWU5ZjcyZDc5N2JmMTE3NTE3Y2RmMzYyODJkNDBiYjnvumZa: 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: ]] 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:34.646 
12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.646 12:16:24 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:34.905 nvme0n1 00:27:34.905 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.905 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.905 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.905 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.905 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.905 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.905 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.905 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.905 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.905 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.905 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.905 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.905 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:34.905 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.905 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:34.905 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:34.905 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:34.905 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2U2NWM3NTRhYjczNGYzOGMyY2U1OThmZTMyNWE3YWJlYTcyZGRlZTRjNGMyZGJir3VtsA==: 00:27:34.905 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: 00:27:34.905 
12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:34.905 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:34.906 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2U2NWM3NTRhYjczNGYzOGMyY2U1OThmZTMyNWE3YWJlYTcyZGRlZTRjNGMyZGJir3VtsA==: 00:27:34.906 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: ]] 00:27:34.906 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: 00:27:34.906 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:34.906 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.906 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:34.906 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:34.906 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:34.906 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.906 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:34.906 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.906 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.906 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.906 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.906 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.906 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.906 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.906 12:16:24 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.906 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.906 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.906 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.906 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.906 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.906 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.906 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:34.906 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.906 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.165 nvme0n1 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg0NGE5YzRkZjlhZTBkNTczNDRmYjE0ZDBkYjUxZmM4ZWRhOGQ1OGExMzcwZjkzZWQwZDI5ZTVjOGMyYzdlY9pvfig=: 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg0NGE5YzRkZjlhZTBkNTczNDRmYjE0ZDBkYjUxZmM4ZWRhOGQ1OGExMzcwZjkzZWQwZDI5ZTVjOGMyYzdlY9pvfig=: 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.165 nvme0n1 00:27:35.165 12:16:24 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.165 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.424 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.424 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:35.424 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.424 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:35.424 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.424 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:35.424 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:35.424 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:35.424 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFiNTg5MDJjYThiYmMxYWMzODVkODhjY2RhMTA5MzTZ/qdw: 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFiNTg5MDJjYThiYmMxYWMzODVkODhjY2RhMTA5MzTZ/qdw: 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: ]] 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.425 12:16:24 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.425 nvme0n1 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.425 
12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFmYmVmMTg4YzY5NTk2MzkxZWQyZmMzZmNlM2FjMDljZDYzY2M3MmQyMjczYjlm1FpGdA==: 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFmYmVmMTg4YzY5NTk2MzkxZWQyZmMzZmNlM2FjMDljZDYzY2M3MmQyMjczYjlm1FpGdA==: 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: ]] 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe3072 1 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 
00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.425 12:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.684 nvme0n1 00:27:35.684 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.684 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.684 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.684 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.684 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.684 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.684 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.684 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.684 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.684 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.684 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.684 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.684 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:35.684 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.684 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:35.684 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe3072 00:27:35.684 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:35.684 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWU5ZjcyZDc5N2JmMTE3NTE3Y2RmMzYyODJkNDBiYjnvumZa: 00:27:35.684 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: 00:27:35.684 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:35.684 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:35.685 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWU5ZjcyZDc5N2JmMTE3NTE3Y2RmMzYyODJkNDBiYjnvumZa: 00:27:35.685 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: ]] 00:27:35.685 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: 00:27:35.685 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:35.685 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.685 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:35.685 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:35.685 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:35.685 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.685 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:35.685 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.685 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.685 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.685 12:16:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.685 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.685 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.685 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.685 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.685 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.685 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.685 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.685 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.685 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.685 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.685 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:35.685 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.685 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.944 nvme0n1 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.944 12:16:25 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2U2NWM3NTRhYjczNGYzOGMyY2U1OThmZTMyNWE3YWJlYTcyZGRlZTRjNGMyZGJir3VtsA==: 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2U2NWM3NTRhYjczNGYzOGMyY2U1OThmZTMyNWE3YWJlYTcyZGRlZTRjNGMyZGJir3VtsA==: 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: ]] 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.944 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.203 nvme0n1 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.203 12:16:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg0NGE5YzRkZjlhZTBkNTczNDRmYjE0ZDBkYjUxZmM4ZWRhOGQ1OGExMzcwZjkzZWQwZDI5ZTVjOGMyYzdlY9pvfig=: 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg0NGE5YzRkZjlhZTBkNTczNDRmYjE0ZDBkYjUxZmM4ZWRhOGQ1OGExMzcwZjkzZWQwZDI5ZTVjOGMyYzdlY9pvfig=: 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.203 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.462 nvme0n1 00:27:36.462 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.462 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.462 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.462 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.462 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.462 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:27:36.462 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.462 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.462 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.462 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.462 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.462 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:36.462 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.462 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:36.462 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.462 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:36.462 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:36.462 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:36.462 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFiNTg5MDJjYThiYmMxYWMzODVkODhjY2RhMTA5MzTZ/qdw: 00:27:36.462 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: 00:27:36.462 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:36.462 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:36.462 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFiNTg5MDJjYThiYmMxYWMzODVkODhjY2RhMTA5MzTZ/qdw: 00:27:36.463 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: ]] 00:27:36.463 12:16:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: 00:27:36.463 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:36.463 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.463 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:36.463 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:36.463 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:36.463 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.463 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:36.463 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.463 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.463 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.463 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.463 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.463 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.463 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.463 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.463 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.463 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.463 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.463 12:16:25 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.463 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.463 12:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.463 12:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:36.463 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.463 12:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.722 nvme0n1 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:36.722 12:16:26 
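Editor's note: the `get_main_ns_ip` sequence traced above (nvmf/common.sh@741-755) resolves the connect address by mapping the transport name to an environment-variable *name*, then dereferencing it with bash indirect expansion. A minimal standalone sketch of that lookup follows; the variable name `TEST_TRANSPORT` is an assumption (the trace only shows the value `tcp`), and the IP is the one echoed in the trace:

```shell
#!/usr/bin/env bash
# Sketch of get_main_ns_ip as seen in the trace: pick the env-var NAME
# for the active transport, then dereference it indirectly.
declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)

TEST_TRANSPORT=tcp          # assumed variable name; trace shows only the value "tcp"
NVMF_INITIATOR_IP=10.0.0.1  # value echoed at nvmf/common.sh@755

ipvar=${ip_candidates[$TEST_TRANSPORT]}   # -> "NVMF_INITIATOR_IP"
[[ -z $ipvar ]] && exit 1                 # mirrors the [[ -z ... ]] guards in the trace
echo "${!ipvar}"                          # indirect expansion; prints 10.0.0.1
```

The indirection lets the same helper serve both the rdma and tcp test suites without branching on the transport at every call site.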
nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFmYmVmMTg4YzY5NTk2MzkxZWQyZmMzZmNlM2FjMDljZDYzY2M3MmQyMjczYjlm1FpGdA==: 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFmYmVmMTg4YzY5NTk2MzkxZWQyZmMzZmNlM2FjMDljZDYzY2M3MmQyMjczYjlm1FpGdA==: 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: ]] 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.722 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.980 nvme0n1 00:27:36.980 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.980 12:16:26 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.980 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.980 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.980 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.980 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWU5ZjcyZDc5N2JmMTE3NTE3Y2RmMzYyODJkNDBiYjnvumZa: 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:ZWU5ZjcyZDc5N2JmMTE3NTE3Y2RmMzYyODJkNDBiYjnvumZa: 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: ]] 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.238 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.496 nvme0n1 00:27:37.496 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2U2NWM3NTRhYjczNGYzOGMyY2U1OThmZTMyNWE3YWJlYTcyZGRlZTRjNGMyZGJir3VtsA==: 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2U2NWM3NTRhYjczNGYzOGMyY2U1OThmZTMyNWE3YWJlYTcyZGRlZTRjNGMyZGJir3VtsA==: 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: ]] 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.497 12:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.755 nvme0n1 00:27:37.755 12:16:27 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg0NGE5YzRkZjlhZTBkNTczNDRmYjE0ZDBkYjUxZmM4ZWRhOGQ1OGExMzcwZjkzZWQwZDI5ZTVjOGMyYzdlY9pvfig=: 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe4096 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg0NGE5YzRkZjlhZTBkNTczNDRmYjE0ZDBkYjUxZmM4ZWRhOGQ1OGExMzcwZjkzZWQwZDI5ZTVjOGMyYzdlY9pvfig=: 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.755 12:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.020 nvme0n1 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:38.020 
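Editor's note: the `for dhgroup in "${dhgroups[@]}"` / `for keyid in "${!keys[@]}"` markers above (host/auth.sh@101-104) show the driver loop behind this whole section: for each DH group, each of the five keyids is installed on the target (`nvmet_auth_set_key`) and then authenticated from the host. A hedged reconstruction of that loop is sketched below; `rpc_cmd` invocations are echoed rather than executed, since the real ones need a live SPDK target:

```shell
#!/usr/bin/env bash
# Hedged reconstruction of the host/auth.sh@101-104 loop driving this trace.
# Group/keyid values are taken from the log; the loop body is a sketch.
dhgroups=(ffdhe4096 ffdhe6144)
keyids=(0 1 2 3 4)

for dhgroup in "${dhgroups[@]}"; do
  for keyid in "${keyids[@]}"; do
    # target side: nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
    # host side (echoed, not run):
    echo "rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups $dhgroup"
    echo "rpc_cmd bdev_nvme_attach_controller ... --dhchap-key key$keyid"
    # then verify with bdev_nvme_get_controllers and detach nvme0
  done
done
```

This matches the trace, which finishes keyids 0-4 under ffdhe4096 and then restarts at keyid 0 with ffdhe6144.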
12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFiNTg5MDJjYThiYmMxYWMzODVkODhjY2RhMTA5MzTZ/qdw: 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFiNTg5MDJjYThiYmMxYWMzODVkODhjY2RhMTA5MzTZ/qdw: 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: ]] 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 
00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:38.020 12:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.020 12:16:27 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.283 nvme0n1 00:27:38.283 12:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.283 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.283 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.283 12:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.283 12:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFmYmVmMTg4YzY5NTk2MzkxZWQyZmMzZmNlM2FjMDljZDYzY2M3MmQyMjczYjlm1FpGdA==: 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFmYmVmMTg4YzY5NTk2MzkxZWQyZmMzZmNlM2FjMDljZDYzY2M3MmQyMjczYjlm1FpGdA==: 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: ]] 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.543 12:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.803 nvme0n1 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.803 12:16:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWU5ZjcyZDc5N2JmMTE3NTE3Y2RmMzYyODJkNDBiYjnvumZa: 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWU5ZjcyZDc5N2JmMTE3NTE3Y2RmMzYyODJkNDBiYjnvumZa: 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: ]] 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid 
ckey 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.803 12:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.374 nvme0n1 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:39.374 12:16:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2U2NWM3NTRhYjczNGYzOGMyY2U1OThmZTMyNWE3YWJlYTcyZGRlZTRjNGMyZGJir3VtsA==: 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2U2NWM3NTRhYjczNGYzOGMyY2U1OThmZTMyNWE3YWJlYTcyZGRlZTRjNGMyZGJir3VtsA==: 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: ]] 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.374 12:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.634 nvme0n1 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg0NGE5YzRkZjlhZTBkNTczNDRmYjE0ZDBkYjUxZmM4ZWRhOGQ1OGExMzcwZjkzZWQwZDI5ZTVjOGMyYzdlY9pvfig=: 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg0NGE5YzRkZjlhZTBkNTczNDRmYjE0ZDBkYjUxZmM4ZWRhOGQ1OGExMzcwZjkzZWQwZDI5ZTVjOGMyYzdlY9pvfig=: 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.634 12:16:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.634 12:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.205 nvme0n1 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=0 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFiNTg5MDJjYThiYmMxYWMzODVkODhjY2RhMTA5MzTZ/qdw: 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFiNTg5MDJjYThiYmMxYWMzODVkODhjY2RhMTA5MzTZ/qdw: 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: ]] 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWNhZmVkMzc4YTNjYzczODE3YTU5Y2RhY2IxM2RlNDgxYjA0ZjRkMjVhYzE0NzY3ZmQ1NzI0OTBhMzlmM2VjZkzVLO8=: 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.205 12:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.774 nvme0n1 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFmYmVmMTg4YzY5NTk2MzkxZWQyZmMzZmNlM2FjMDljZDYzY2M3MmQyMjczYjlm1FpGdA==: 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFmYmVmMTg4YzY5NTk2MzkxZWQyZmMzZmNlM2FjMDljZDYzY2M3MmQyMjczYjlm1FpGdA==: 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: ]] 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.774 12:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.775 12:16:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.775 12:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.775 12:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.775 12:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.775 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:40.775 12:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.775 12:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.343 nvme0n1 00:27:41.343 12:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.343 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.343 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.343 12:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.343 12:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.343 12:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.343 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.343 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWU5ZjcyZDc5N2JmMTE3NTE3Y2RmMzYyODJkNDBiYjnvumZa: 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWU5ZjcyZDc5N2JmMTE3NTE3Y2RmMzYyODJkNDBiYjnvumZa: 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: ]] 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODBlNWI2OWE5ZmVmYTRiYmU3ODNhODQwZTBkN2Q0YTXBtFV5: 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe8192 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.344 12:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.911 nvme0n1 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2U2NWM3NTRhYjczNGYzOGMyY2U1OThmZTMyNWE3YWJlYTcyZGRlZTRjNGMyZGJir3VtsA==: 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:M2U2NWM3NTRhYjczNGYzOGMyY2U1OThmZTMyNWE3YWJlYTcyZGRlZTRjNGMyZGJir3VtsA==: 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: ]] 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDMxZDc1MTQxMTY1ZDc0OTgyNjI0OGFhM2NiMzViZmQ0o2zz: 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.911 12:16:31 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.911 12:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.912 12:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.912 12:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.912 12:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:41.912 12:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.912 12:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.477 nvme0n1 00:27:42.477 12:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.477 12:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.477 12:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.477 12:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.477 12:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.477 12:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.750 12:16:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg0NGE5YzRkZjlhZTBkNTczNDRmYjE0ZDBkYjUxZmM4ZWRhOGQ1OGExMzcwZjkzZWQwZDI5ZTVjOGMyYzdlY9pvfig=: 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg0NGE5YzRkZjlhZTBkNTczNDRmYjE0ZDBkYjUxZmM4ZWRhOGQ1OGExMzcwZjkzZWQwZDI5ZTVjOGMyYzdlY9pvfig=: 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:42.750 12:16:32 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.750 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.167 nvme0n1 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFmYmVmMTg4YzY5NTk2MzkxZWQyZmMzZmNlM2FjMDljZDYzY2M3MmQyMjczYjlm1FpGdA==: 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFmYmVmMTg4YzY5NTk2MzkxZWQyZmMzZmNlM2FjMDljZDYzY2M3MmQyMjczYjlm1FpGdA==: 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 
-- # [[ -z DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: ]] 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGMzMWMzNTBmMWZhMzQzYzZiZjU3NmExMzQzODE3ZTgwZDY5NDBjMzYzNjNkZWNhzBxpKw==: 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@649 -- # local es=0 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.167 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.425 request: 00:27:43.425 { 00:27:43.425 "name": "nvme0", 00:27:43.425 "trtype": "tcp", 00:27:43.425 "traddr": "10.0.0.1", 00:27:43.425 "adrfam": "ipv4", 00:27:43.425 "trsvcid": "4420", 00:27:43.425 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:43.425 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:43.425 "prchk_reftag": false, 00:27:43.425 "prchk_guard": false, 00:27:43.425 "hdgst": false, 00:27:43.425 "ddgst": false, 00:27:43.425 "method": "bdev_nvme_attach_controller", 00:27:43.425 "req_id": 1 00:27:43.425 } 00:27:43.425 Got JSON-RPC error response 00:27:43.425 response: 00:27:43.425 { 00:27:43.425 "code": -5, 00:27:43.425 "message": "Input/output error" 00:27:43.425 } 00:27:43.425 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:27:43.425 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 
00:27:43.425 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:43.425 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:43.425 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:27:43.425 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.425 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:43.425 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.425 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.425 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.425 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:43.425 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:43.425 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.425 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.425 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.425 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.425 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.425 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.425 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.425 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.425 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.425 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.425 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:43.425 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.426 request: 00:27:43.426 { 00:27:43.426 "name": "nvme0", 00:27:43.426 "trtype": "tcp", 00:27:43.426 "traddr": "10.0.0.1", 00:27:43.426 "adrfam": "ipv4", 00:27:43.426 "trsvcid": "4420", 00:27:43.426 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:43.426 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:43.426 "prchk_reftag": false, 00:27:43.426 "prchk_guard": false, 00:27:43.426 "hdgst": false, 00:27:43.426 "ddgst": false, 00:27:43.426 "dhchap_key": "key2", 00:27:43.426 "method": "bdev_nvme_attach_controller", 00:27:43.426 "req_id": 1 00:27:43.426 } 00:27:43.426 Got JSON-RPC error response 00:27:43.426 response: 00:27:43.426 { 
00:27:43.426 "code": -5, 00:27:43.426 "message": "Input/output error" 00:27:43.426 } 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.426 
12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.426 request: 00:27:43.426 { 00:27:43.426 "name": "nvme0", 00:27:43.426 "trtype": "tcp", 00:27:43.426 "traddr": "10.0.0.1", 00:27:43.426 "adrfam": "ipv4", 00:27:43.426 "trsvcid": "4420", 00:27:43.426 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:43.426 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:43.426 
"prchk_reftag": false, 00:27:43.426 "prchk_guard": false, 00:27:43.426 "hdgst": false, 00:27:43.426 "ddgst": false, 00:27:43.426 "dhchap_key": "key1", 00:27:43.426 "dhchap_ctrlr_key": "ckey2", 00:27:43.426 "method": "bdev_nvme_attach_controller", 00:27:43.426 "req_id": 1 00:27:43.426 } 00:27:43.426 Got JSON-RPC error response 00:27:43.426 response: 00:27:43.426 { 00:27:43.426 "code": -5, 00:27:43.426 "message": "Input/output error" 00:27:43.426 } 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:43.426 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:43.684 rmmod nvme_tcp 00:27:43.684 rmmod nvme_fabrics 00:27:43.684 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:43.684 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:27:43.684 12:16:32 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@125 -- # return 0 00:27:43.684 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2354437 ']' 00:27:43.684 12:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2354437 00:27:43.684 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@949 -- # '[' -z 2354437 ']' 00:27:43.684 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # kill -0 2354437 00:27:43.684 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # uname 00:27:43.684 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:43.684 12:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2354437 00:27:43.684 12:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:27:43.684 12:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:27:43.684 12:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2354437' 00:27:43.684 killing process with pid 2354437 00:27:43.684 12:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@968 -- # kill 2354437 00:27:43.684 12:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@973 -- # wait 2354437 00:27:43.684 12:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:43.684 12:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:43.684 12:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:43.684 12:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:43.684 12:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:43.943 12:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.943 12:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:27:43.943 12:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:45.849 12:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:45.849 12:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:45.849 12:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:45.849 12:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:45.849 12:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:45.849 12:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:27:45.849 12:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:45.849 12:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:45.849 12:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:45.849 12:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:45.849 12:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:45.849 12:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:45.849 12:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:49.134 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:49.134 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:49.134 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:49.134 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 
00:27:49.134 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:49.134 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:49.134 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:49.134 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:49.134 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:49.134 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:49.134 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:49.134 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:49.134 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:49.134 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:49.134 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:49.134 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:50.512 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:27:50.512 12:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.PBL /tmp/spdk.key-null.jwm /tmp/spdk.key-sha256.raO /tmp/spdk.key-sha384.Tyq /tmp/spdk.key-sha512.1jS /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:50.512 12:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:53.889 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:53.890 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:53.890 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:53.890 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:53.890 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:53.890 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:53.890 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:53.890 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:53.890 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:53.890 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:53.890 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:53.890 
0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:53.890 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:53.890 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:53.890 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:53.890 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:53.890 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:53.890 00:27:53.890 real 0m51.751s 00:27:53.890 user 0m43.814s 00:27:53.890 sys 0m14.303s 00:27:53.890 12:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:53.890 12:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.890 ************************************ 00:27:53.890 END TEST nvmf_auth_host 00:27:53.890 ************************************ 00:27:53.890 12:16:43 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:27:53.890 12:16:43 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:53.890 12:16:43 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:27:53.890 12:16:43 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:53.890 12:16:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:53.890 ************************************ 00:27:53.890 START TEST nvmf_digest 00:27:53.890 ************************************ 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:53.890 * Looking for test storage... 
00:27:53.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:27:53.890 12:16:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:00.458 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:00.458 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:00.458 Found net devices under 0000:af:00.0: cvl_0_0 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:00.458 Found net devices under 0000:af:00.1: cvl_0_1 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip 
netns add cvl_0_0_ns_spdk 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:00.458 12:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:00.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:00.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:28:00.717 00:28:00.717 --- 10.0.0.2 ping statistics --- 00:28:00.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.717 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:00.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:00.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:28:00.717 00:28:00.717 --- 10.0.0.1 ping statistics --- 00:28:00.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.717 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:00.717 ************************************ 00:28:00.717 START TEST nvmf_digest_clean 00:28:00.717 ************************************ 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # run_digest 00:28:00.717 12:16:50 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2368170 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2368170 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 2368170 ']' 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:00.717 12:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:00.717 [2024-06-10 12:16:50.167740] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:28:00.717 [2024-06-10 12:16:50.167785] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:00.717 EAL: No free 2048 kB hugepages reported on node 1 00:28:00.976 [2024-06-10 12:16:50.243426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.976 [2024-06-10 12:16:50.316362] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:00.976 [2024-06-10 12:16:50.316397] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:00.976 [2024-06-10 12:16:50.316407] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:00.976 [2024-06-10 12:16:50.316416] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:00.976 [2024-06-10 12:16:50.316423] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:00.976 [2024-06-10 12:16:50.316448] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.545 12:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:01.545 12:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:28:01.545 12:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:01.545 12:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:01.545 12:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:01.545 12:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:01.545 12:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:01.545 12:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:01.545 12:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:01.545 12:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.545 12:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:01.805 null0 00:28:01.805 [2024-06-10 12:16:51.103758] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:01.805 [2024-06-10 12:16:51.127968] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:01.805 12:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.805 12:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:01.805 12:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:01.805 
12:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:01.805 12:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:01.805 12:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:01.805 12:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:01.805 12:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:01.805 12:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2368209 00:28:01.805 12:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2368209 /var/tmp/bperf.sock 00:28:01.805 12:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:01.805 12:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 2368209 ']' 00:28:01.805 12:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:01.805 12:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:01.805 12:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:01.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:01.805 12:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:01.805 12:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:01.805 [2024-06-10 12:16:51.181076] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:28:01.805 [2024-06-10 12:16:51.181123] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2368209 ] 00:28:01.805 EAL: No free 2048 kB hugepages reported on node 1 00:28:01.805 [2024-06-10 12:16:51.251903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.064 [2024-06-10 12:16:51.327383] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:02.631 12:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:02.631 12:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:28:02.631 12:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:02.631 12:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:02.631 12:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:02.889 12:16:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:02.889 12:16:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:03.148 nvme0n1 00:28:03.148 12:16:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:03.148 12:16:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock 
perform_tests 00:28:03.148 Running I/O for 2 seconds... 00:28:05.052 00:28:05.052 Latency(us) 00:28:05.052 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:05.052 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:05.052 nvme0n1 : 2.00 25819.76 100.86 0.00 0.00 4952.48 2424.83 11691.62 00:28:05.052 =================================================================================================================== 00:28:05.052 Total : 25819.76 100.86 0.00 0.00 4952.48 2424.83 11691.62 00:28:05.052 0 00:28:05.052 12:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:05.052 12:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:05.052 12:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:05.052 12:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:05.052 | select(.opcode=="crc32c") 00:28:05.052 | "\(.module_name) \(.executed)"' 00:28:05.052 12:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:05.311 12:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:05.311 12:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:05.311 12:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:05.311 12:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:05.311 12:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2368209 00:28:05.311 12:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 2368209 ']' 00:28:05.311 12:16:54 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 2368209 00:28:05.311 12:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:28:05.311 12:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:05.311 12:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2368209 00:28:05.311 12:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:05.311 12:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:05.311 12:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2368209' 00:28:05.311 killing process with pid 2368209 00:28:05.311 12:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 2368209 00:28:05.311 Received shutdown signal, test time was about 2.000000 seconds 00:28:05.312 00:28:05.312 Latency(us) 00:28:05.312 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:05.312 =================================================================================================================== 00:28:05.312 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:05.312 12:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 2368209 00:28:05.571 12:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:05.571 12:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:05.571 12:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:05.571 12:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:05.571 12:16:54 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:05.571 12:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:05.571 12:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:05.571 12:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2369000 00:28:05.571 12:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2369000 /var/tmp/bperf.sock 00:28:05.571 12:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:05.571 12:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 2369000 ']' 00:28:05.571 12:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:05.571 12:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:05.571 12:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:05.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:05.571 12:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:05.571 12:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:05.571 [2024-06-10 12:16:55.037786] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:28:05.571 [2024-06-10 12:16:55.037841] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2369000 ] 00:28:05.571 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:05.571 Zero copy mechanism will not be used. 00:28:05.571 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.830 [2024-06-10 12:16:55.108607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.830 [2024-06-10 12:16:55.183219] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.399 12:16:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:06.399 12:16:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:28:06.399 12:16:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:06.399 12:16:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:06.399 12:16:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:06.658 12:16:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:06.658 12:16:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:06.916 nvme0n1 00:28:06.916 12:16:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:06.916 12:16:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:07.174 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:07.174 Zero copy mechanism will not be used. 00:28:07.174 Running I/O for 2 seconds... 00:28:09.076 00:28:09.076 Latency(us) 00:28:09.076 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:09.076 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:09.076 nvme0n1 : 2.00 5856.63 732.08 0.00 0.00 2729.55 878.18 9804.19 00:28:09.076 =================================================================================================================== 00:28:09.076 Total : 5856.63 732.08 0.00 0.00 2729.55 878.18 9804.19 00:28:09.076 0 00:28:09.076 12:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:09.076 12:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:09.076 12:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:09.076 12:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:09.076 12:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:09.076 | select(.opcode=="crc32c") 00:28:09.076 | "\(.module_name) \(.executed)"' 00:28:09.335 12:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:09.336 12:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:09.336 12:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:09.336 12:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:09.336 12:16:58 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2369000 00:28:09.336 12:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 2369000 ']' 00:28:09.336 12:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 2369000 00:28:09.336 12:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:28:09.336 12:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:09.336 12:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2369000 00:28:09.336 12:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:09.336 12:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:09.336 12:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2369000' 00:28:09.336 killing process with pid 2369000 00:28:09.336 12:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 2369000 00:28:09.336 Received shutdown signal, test time was about 2.000000 seconds 00:28:09.336 00:28:09.336 Latency(us) 00:28:09.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:09.336 =================================================================================================================== 00:28:09.336 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:09.336 12:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 2369000 00:28:09.595 12:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:09.595 12:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:09.595 12:16:58 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:09.595 12:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:09.595 12:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:09.595 12:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:09.595 12:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:09.595 12:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2369557 00:28:09.595 12:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2369557 /var/tmp/bperf.sock 00:28:09.595 12:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:09.595 12:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 2369557 ']' 00:28:09.595 12:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:09.595 12:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:09.595 12:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:09.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:09.595 12:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:09.595 12:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:09.595 [2024-06-10 12:16:58.975214] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:28:09.595 [2024-06-10 12:16:58.975267] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2369557 ] 00:28:09.595 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.595 [2024-06-10 12:16:59.045077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.854 [2024-06-10 12:16:59.120102] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.423 12:16:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:10.423 12:16:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:28:10.423 12:16:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:10.423 12:16:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:10.423 12:16:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:10.681 12:17:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:10.681 12:17:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:10.940 nvme0n1 00:28:10.940 12:17:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:10.940 12:17:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock 
perform_tests 00:28:11.198 Running I/O for 2 seconds... 00:28:13.098 00:28:13.098 Latency(us) 00:28:13.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:13.098 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:13.098 nvme0n1 : 2.00 28402.70 110.95 0.00 0.00 4498.75 3643.80 8074.04 00:28:13.098 =================================================================================================================== 00:28:13.098 Total : 28402.70 110.95 0.00 0.00 4498.75 3643.80 8074.04 00:28:13.098 0 00:28:13.098 12:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:13.098 12:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:13.098 12:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:13.098 12:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:13.098 | select(.opcode=="crc32c") 00:28:13.098 | "\(.module_name) \(.executed)"' 00:28:13.099 12:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:13.357 12:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:13.357 12:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:13.357 12:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:13.357 12:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:13.357 12:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2369557 00:28:13.357 12:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 2369557 ']' 00:28:13.357 12:17:02 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 2369557 00:28:13.357 12:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:28:13.357 12:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:13.357 12:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2369557 00:28:13.357 12:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:13.357 12:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:13.357 12:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2369557' 00:28:13.357 killing process with pid 2369557 00:28:13.357 12:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 2369557 00:28:13.357 Received shutdown signal, test time was about 2.000000 seconds 00:28:13.357 00:28:13.357 Latency(us) 00:28:13.357 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:13.357 =================================================================================================================== 00:28:13.357 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:13.357 12:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 2369557 00:28:13.616 12:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:13.616 12:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:13.616 12:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:13.616 12:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:13.616 12:17:02 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:13.616 12:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:13.616 12:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:13.616 12:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2370285 00:28:13.616 12:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2370285 /var/tmp/bperf.sock 00:28:13.616 12:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:13.616 12:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 2370285 ']' 00:28:13.616 12:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:13.616 12:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:13.616 12:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:13.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:13.616 12:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:13.616 12:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:13.616 [2024-06-10 12:17:02.996203] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:28:13.616 [2024-06-10 12:17:02.996261] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2370285 ] 00:28:13.616 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:13.616 Zero copy mechanism will not be used. 00:28:13.616 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.616 [2024-06-10 12:17:03.065327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.876 [2024-06-10 12:17:03.139656] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:14.445 12:17:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:14.445 12:17:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:28:14.445 12:17:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:14.445 12:17:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:14.445 12:17:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:14.709 12:17:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:14.709 12:17:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:14.969 nvme0n1 00:28:14.969 12:17:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:14.969 12:17:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:14.969 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:14.969 Zero copy mechanism will not be used. 00:28:14.969 Running I/O for 2 seconds... 00:28:16.876 00:28:16.876 Latency(us) 00:28:16.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:16.876 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:16.876 nvme0n1 : 2.00 5810.89 726.36 0.00 0.00 2748.26 2044.72 9279.90 00:28:16.876 =================================================================================================================== 00:28:16.876 Total : 5810.89 726.36 0.00 0.00 2748.26 2044.72 9279.90 00:28:16.876 0 00:28:17.136 12:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:17.136 12:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:17.136 12:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:17.136 12:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:17.136 12:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:17.136 | select(.opcode=="crc32c") 00:28:17.136 | "\(.module_name) \(.executed)"' 00:28:17.136 12:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:17.136 12:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:17.136 12:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:17.136 12:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:17.136 12:17:06 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2370285 00:28:17.136 12:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 2370285 ']' 00:28:17.136 12:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 2370285 00:28:17.136 12:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:28:17.136 12:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:17.136 12:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2370285 00:28:17.136 12:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:17.136 12:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:17.136 12:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2370285' 00:28:17.136 killing process with pid 2370285 00:28:17.136 12:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 2370285 00:28:17.136 Received shutdown signal, test time was about 2.000000 seconds 00:28:17.136 00:28:17.136 Latency(us) 00:28:17.136 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:17.136 =================================================================================================================== 00:28:17.136 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:17.136 12:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 2370285 00:28:17.395 12:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2368170 00:28:17.395 12:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 2368170 ']' 00:28:17.395 12:17:06 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 2368170 00:28:17.395 12:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:28:17.395 12:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:17.395 12:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2368170 00:28:17.395 12:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:17.395 12:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:17.395 12:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2368170' 00:28:17.395 killing process with pid 2368170 00:28:17.395 12:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 2368170 00:28:17.395 12:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 2368170 00:28:17.656 00:28:17.656 real 0m16.942s 00:28:17.656 user 0m31.695s 00:28:17.656 sys 0m5.356s 00:28:17.656 12:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:17.656 12:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:17.656 ************************************ 00:28:17.656 END TEST nvmf_digest_clean 00:28:17.656 ************************************ 00:28:17.656 12:17:07 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:17.656 12:17:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:28:17.656 12:17:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:17.656 12:17:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:17.656 
************************************ 00:28:17.656 START TEST nvmf_digest_error 00:28:17.656 ************************************ 00:28:17.656 12:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # run_digest_error 00:28:17.656 12:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:17.656 12:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:17.656 12:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:17.656 12:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:17.656 12:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2370935 00:28:17.656 12:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2370935 00:28:17.656 12:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:17.657 12:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 2370935 ']' 00:28:17.657 12:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:17.657 12:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:17.657 12:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:17.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:17.657 12:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:17.657 12:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:17.916 [2024-06-10 12:17:07.191786] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:28:17.916 [2024-06-10 12:17:07.191830] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:17.916 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.916 [2024-06-10 12:17:07.265976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.916 [2024-06-10 12:17:07.338009] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:17.916 [2024-06-10 12:17:07.338046] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:17.916 [2024-06-10 12:17:07.338056] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:17.916 [2024-06-10 12:17:07.338064] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:17.916 [2024-06-10 12:17:07.338071] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:17.916 [2024-06-10 12:17:07.338091] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.482 12:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:18.482 12:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:28:18.482 12:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:18.482 12:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:18.482 12:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:18.742 12:17:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:18.742 12:17:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:18.742 12:17:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:18.742 12:17:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:18.742 [2024-06-10 12:17:08.036142] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:18.742 12:17:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:18.742 12:17:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:18.742 12:17:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:18.742 12:17:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:18.742 12:17:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:18.742 null0 00:28:18.742 [2024-06-10 12:17:08.129887] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:18.742 
[2024-06-10 12:17:08.154105] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:18.742 12:17:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:18.742 12:17:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:18.742 12:17:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:18.742 12:17:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:18.742 12:17:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:18.742 12:17:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:18.742 12:17:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2371216 00:28:18.742 12:17:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2371216 /var/tmp/bperf.sock 00:28:18.742 12:17:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:18.742 12:17:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 2371216 ']' 00:28:18.742 12:17:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:18.742 12:17:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:18.742 12:17:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:18.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:28:18.742 12:17:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:18.742 12:17:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:18.742 [2024-06-10 12:17:08.207860] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:28:18.742 [2024-06-10 12:17:08.207917] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2371216 ] 00:28:18.742 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.040 [2024-06-10 12:17:08.275903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.040 [2024-06-10 12:17:08.354851] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.653 12:17:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:19.653 12:17:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:28:19.653 12:17:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:19.653 12:17:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:19.912 12:17:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:19.912 12:17:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:19.912 12:17:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:19.912 12:17:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:19.912 12:17:09 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:19.912 12:17:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:19.912 nvme0n1 00:28:19.912 12:17:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:19.912 12:17:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:19.912 12:17:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:20.172 12:17:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:20.172 12:17:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:20.172 12:17:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:20.172 Running I/O for 2 seconds... 
00:28:20.172 [2024-06-10 12:17:09.536689] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.172 [2024-06-10 12:17:09.536726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.172 [2024-06-10 12:17:09.536738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.172 [2024-06-10 12:17:09.546432] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.172 [2024-06-10 12:17:09.546459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.172 [2024-06-10 12:17:09.546470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.172 [2024-06-10 12:17:09.554765] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.172 [2024-06-10 12:17:09.554789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.172 [2024-06-10 12:17:09.554801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.172 [2024-06-10 12:17:09.565240] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.172 [2024-06-10 12:17:09.565263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.172 [2024-06-10 12:17:09.565274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.172 [2024-06-10 12:17:09.574725] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.173 [2024-06-10 12:17:09.574748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.173 [2024-06-10 12:17:09.574760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.173 [2024-06-10 12:17:09.582643] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.173 [2024-06-10 12:17:09.582665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.173 [2024-06-10 12:17:09.582675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.173 [2024-06-10 12:17:09.594164] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.173 [2024-06-10 12:17:09.594186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.173 [2024-06-10 12:17:09.594201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.173 [2024-06-10 12:17:09.602277] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.173 [2024-06-10 12:17:09.602299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.173 [2024-06-10 12:17:09.602309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.173 [2024-06-10 12:17:09.612802] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.173 [2024-06-10 12:17:09.612824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.173 [2024-06-10 12:17:09.612835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.173 [2024-06-10 12:17:09.624305] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.173 [2024-06-10 12:17:09.624327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.173 [2024-06-10 12:17:09.624337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.173 [2024-06-10 12:17:09.635895] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.173 [2024-06-10 12:17:09.635917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.173 [2024-06-10 12:17:09.635927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.173 [2024-06-10 12:17:09.648215] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.173 [2024-06-10 12:17:09.648235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.173 [2024-06-10 12:17:09.648246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.173 [2024-06-10 12:17:09.659723] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.173 [2024-06-10 12:17:09.659745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.173 [2024-06-10 12:17:09.659755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.173 [2024-06-10 12:17:09.669647] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.173 [2024-06-10 12:17:09.669669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.173 [2024-06-10 12:17:09.669679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.173 [2024-06-10 12:17:09.677519] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.173 [2024-06-10 12:17:09.677540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.173 [2024-06-10 12:17:09.677550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.173 [2024-06-10 12:17:09.689235] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.173 [2024-06-10 12:17:09.689260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:41 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.173 [2024-06-10 12:17:09.689271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.433 [2024-06-10 12:17:09.700287] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.433 [2024-06-10 12:17:09.700308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.433 [2024-06-10 12:17:09.700318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.433 [2024-06-10 12:17:09.708600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.433 [2024-06-10 12:17:09.708622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.433 [2024-06-10 12:17:09.708632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.433 [2024-06-10 12:17:09.718127] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.433 [2024-06-10 12:17:09.718148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.433 [2024-06-10 12:17:09.718159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.433 [2024-06-10 12:17:09.726983] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.433 [2024-06-10 12:17:09.727004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.433 [2024-06-10 12:17:09.727015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.433 [2024-06-10 12:17:09.734737] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.433 [2024-06-10 12:17:09.734758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.433 [2024-06-10 12:17:09.734769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.433 [2024-06-10 12:17:09.743732] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.433 [2024-06-10 12:17:09.743754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.433 [2024-06-10 12:17:09.743764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.433 [2024-06-10 12:17:09.753954] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.433 [2024-06-10 12:17:09.753976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.433 [2024-06-10 12:17:09.753987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.433 [2024-06-10 12:17:09.761745] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.433 [2024-06-10 12:17:09.761767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.433 [2024-06-10 12:17:09.761777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.433 [2024-06-10 12:17:09.771391] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.433 [2024-06-10 12:17:09.771412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.433 [2024-06-10 12:17:09.771422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.433 [2024-06-10 12:17:09.779300] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.433 [2024-06-10 12:17:09.779321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.433 [2024-06-10 12:17:09.779332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.433 [2024-06-10 12:17:09.789945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.433 [2024-06-10 12:17:09.789967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.433 [2024-06-10 12:17:09.789977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.433 [2024-06-10 12:17:09.800401] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.433 [2024-06-10 12:17:09.800423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.433 [2024-06-10 12:17:09.800434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.433 [2024-06-10 12:17:09.809594] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.433 [2024-06-10 12:17:09.809629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.433 [2024-06-10 12:17:09.809640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.433 [2024-06-10 12:17:09.817596] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.433 [2024-06-10 12:17:09.817618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.433 [2024-06-10 12:17:09.817629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.433 [2024-06-10 12:17:09.828049] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.433 [2024-06-10 12:17:09.828070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.433 [2024-06-10 12:17:09.828081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.433 [2024-06-10 12:17:09.837513] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.434 [2024-06-10 12:17:09.837535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.434 [2024-06-10 12:17:09.837545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.434 [2024-06-10 12:17:09.844713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.434 [2024-06-10 12:17:09.844734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.434 [2024-06-10 12:17:09.844748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.434 [2024-06-10 12:17:09.855712] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.434 [2024-06-10 12:17:09.855735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.434 [2024-06-10 12:17:09.855745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.434 [2024-06-10 12:17:09.867249] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.434 [2024-06-10 12:17:09.867270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.434 [2024-06-10 12:17:09.867281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.434 [2024-06-10 12:17:09.878804] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.434 [2024-06-10 12:17:09.878826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.434 [2024-06-10 12:17:09.878836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.434 [2024-06-10 12:17:09.886836] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.434 [2024-06-10 12:17:09.886857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.434 [2024-06-10 12:17:09.886867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.434 [2024-06-10 12:17:09.896773] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.434 [2024-06-10 12:17:09.896794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.434 [2024-06-10 12:17:09.896804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.434 [2024-06-10 12:17:09.908293] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.434 [2024-06-10 12:17:09.908314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.434 [2024-06-10 12:17:09.908325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.434 [2024-06-10 12:17:09.919105] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.434 [2024-06-10 12:17:09.919126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.434 [2024-06-10 12:17:09.919137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.434 [2024-06-10 12:17:09.927874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.434 [2024-06-10 12:17:09.927896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.434 [2024-06-10 12:17:09.927906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.434 [2024-06-10 12:17:09.939491] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.434 [2024-06-10 12:17:09.939515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.434 [2024-06-10 12:17:09.939525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.434 [2024-06-10 12:17:09.951643] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.434 [2024-06-10 12:17:09.951664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.434 [2024-06-10 12:17:09.951675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.693 [2024-06-10 12:17:09.963043] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.693 [2024-06-10 12:17:09.963065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.693 [2024-06-10 12:17:09.963075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.693 [2024-06-10 12:17:09.972498] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.693 [2024-06-10 12:17:09.972520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.693 [2024-06-10 12:17:09.972531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.693 [2024-06-10 12:17:09.980590] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.693 [2024-06-10 12:17:09.980611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.693 [2024-06-10 12:17:09.980622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.693 [2024-06-10 12:17:09.992237] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.693 [2024-06-10 12:17:09.992259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.693 [2024-06-10 12:17:09.992270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.693 [2024-06-10 12:17:10.004203] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.693 [2024-06-10 12:17:10.004225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.693 [2024-06-10 12:17:10.004236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.693 [2024-06-10 12:17:10.016140] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.693 [2024-06-10 12:17:10.016162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.693 [2024-06-10 12:17:10.016173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.693 [2024-06-10 12:17:10.026618] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.693 [2024-06-10 12:17:10.026640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.693 [2024-06-10 12:17:10.026651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.693 [2024-06-10 12:17:10.035682] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.693 [2024-06-10 12:17:10.035704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.693 [2024-06-10 12:17:10.035714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.693 [2024-06-10 12:17:10.047807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.693 [2024-06-10 12:17:10.047832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.693 [2024-06-10 12:17:10.047844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.693 [2024-06-10 12:17:10.060279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.693 [2024-06-10 12:17:10.060303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.693 [2024-06-10 12:17:10.060314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.693 [2024-06-10 12:17:10.072067] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.693 [2024-06-10 12:17:10.072089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.693 [2024-06-10 12:17:10.072100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.693 [2024-06-10 12:17:10.080404] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.694 [2024-06-10 12:17:10.080426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.694 [2024-06-10 12:17:10.080437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.694 [2024-06-10 12:17:10.091584] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.694 [2024-06-10 12:17:10.091606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.694 [2024-06-10 12:17:10.091617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.694 [2024-06-10 12:17:10.103527] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.694 [2024-06-10 12:17:10.103548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.694 [2024-06-10 12:17:10.103559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.694 [2024-06-10 12:17:10.114937] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.694 [2024-06-10 12:17:10.114959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.694 [2024-06-10 12:17:10.114970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.694 [2024-06-10 12:17:10.126580] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.694 [2024-06-10 12:17:10.126603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.694 [2024-06-10 12:17:10.126617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.694 [2024-06-10 12:17:10.136762] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.694 [2024-06-10 12:17:10.136783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.694 [2024-06-10 12:17:10.136794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.694 [2024-06-10 12:17:10.145666] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.694 [2024-06-10 12:17:10.145687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.694 [2024-06-10 12:17:10.145698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.694 [2024-06-10 12:17:10.156941] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.694 [2024-06-10 12:17:10.156963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.694 [2024-06-10 12:17:10.156973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.694 [2024-06-10 12:17:10.168541] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.694 [2024-06-10 12:17:10.168564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.694 [2024-06-10 12:17:10.168574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.694 [2024-06-10 12:17:10.180215] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.694 [2024-06-10 12:17:10.180238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.694 [2024-06-10 12:17:10.180249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.694 [2024-06-10 12:17:10.191974] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.694 [2024-06-10 12:17:10.191997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.694 [2024-06-10 12:17:10.192007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.694 [2024-06-10 12:17:10.203896] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.694 [2024-06-10 12:17:10.203918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.694 [2024-06-10 12:17:10.203928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.694 [2024-06-10 12:17:10.211184] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.694 [2024-06-10 12:17:10.211206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.694 [2024-06-10 12:17:10.211217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.954 [2024-06-10 12:17:10.220854] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.954 [2024-06-10 12:17:10.220877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.954 [2024-06-10 12:17:10.220888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.954 [2024-06-10 12:17:10.231501] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.954 [2024-06-10 12:17:10.231523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.954 [2024-06-10 12:17:10.231534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.954 [2024-06-10 12:17:10.241041] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.954 [2024-06-10 12:17:10.241063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.954 [2024-06-10 12:17:10.241074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.954 [2024-06-10 12:17:10.249087] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.954 [2024-06-10 12:17:10.249110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.954 [2024-06-10 12:17:10.249121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.954 [2024-06-10 12:17:10.260467] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.954 [2024-06-10 12:17:10.260495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.954 [2024-06-10 12:17:10.260505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.954 [2024-06-10 12:17:10.268064] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.954 [2024-06-10 12:17:10.268086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.954 [2024-06-10 12:17:10.268096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.954 [2024-06-10 12:17:10.276942] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.954 [2024-06-10 12:17:10.276964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.954 [2024-06-10 12:17:10.276974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.954 [2024-06-10 12:17:10.286795] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.954 [2024-06-10 12:17:10.286818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.954 [2024-06-10 12:17:10.286829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.954 [2024-06-10 12:17:10.296691] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.954 [2024-06-10 12:17:10.296713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.954 [2024-06-10 12:17:10.296727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.954 [2024-06-10 12:17:10.304422] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.954 [2024-06-10 12:17:10.304443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.954 [2024-06-10 12:17:10.304454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.954 [2024-06-10 12:17:10.314637] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.954 [2024-06-10 12:17:10.314658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.954 [2024-06-10 12:17:10.314669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.954 [2024-06-10 12:17:10.324970] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.954 [2024-06-10 12:17:10.324991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.954 [2024-06-10 12:17:10.325002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.954 [2024-06-10 12:17:10.332676] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.954 [2024-06-10 12:17:10.332697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.954 [2024-06-10 12:17:10.332707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.954 [2024-06-10 12:17:10.344165] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.954 [2024-06-10 12:17:10.344187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.954 [2024-06-10 12:17:10.344198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.954 [2024-06-10 12:17:10.354669] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80)
00:28:20.954 [2024-06-10 12:17:10.354692]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.955 [2024-06-10 12:17:10.354702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.955 [2024-06-10 12:17:10.362534] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:20.955 [2024-06-10 12:17:10.362555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.955 [2024-06-10 12:17:10.362565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.955 [2024-06-10 12:17:10.372122] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:20.955 [2024-06-10 12:17:10.372144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.955 [2024-06-10 12:17:10.372154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.955 [2024-06-10 12:17:10.380561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:20.955 [2024-06-10 12:17:10.380585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.955 [2024-06-10 12:17:10.380595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.955 [2024-06-10 12:17:10.390877] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xf68a80) 00:28:20.955 [2024-06-10 12:17:10.390898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:8348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.955 [2024-06-10 12:17:10.390909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.955 [2024-06-10 12:17:10.398904] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:20.955 [2024-06-10 12:17:10.398926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.955 [2024-06-10 12:17:10.398937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.955 [2024-06-10 12:17:10.409737] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:20.955 [2024-06-10 12:17:10.409759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.955 [2024-06-10 12:17:10.409769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.955 [2024-06-10 12:17:10.420052] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:20.955 [2024-06-10 12:17:10.420073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.955 [2024-06-10 12:17:10.420084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.955 [2024-06-10 12:17:10.428835] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:20.955 [2024-06-10 12:17:10.428856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.955 [2024-06-10 12:17:10.428866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.955 [2024-06-10 12:17:10.438671] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:20.955 [2024-06-10 12:17:10.438692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.955 [2024-06-10 12:17:10.438703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.955 [2024-06-10 12:17:10.447490] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:20.955 [2024-06-10 12:17:10.447511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.955 [2024-06-10 12:17:10.447522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.955 [2024-06-10 12:17:10.455949] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:20.955 [2024-06-10 12:17:10.455970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.955 [2024-06-10 12:17:10.455981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:20.955 [2024-06-10 12:17:10.464295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:20.955 [2024-06-10 12:17:10.464316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.955 [2024-06-10 12:17:10.464326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.216 [2024-06-10 12:17:10.473896] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.216 [2024-06-10 12:17:10.473918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.216 [2024-06-10 12:17:10.473928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.216 [2024-06-10 12:17:10.484315] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.216 [2024-06-10 12:17:10.484335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.216 [2024-06-10 12:17:10.484346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.216 [2024-06-10 12:17:10.492108] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.216 [2024-06-10 12:17:10.492128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.216 [2024-06-10 12:17:10.492138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.216 [2024-06-10 12:17:10.504104] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.216 [2024-06-10 12:17:10.504124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:38 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.216 [2024-06-10 12:17:10.504134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.216 [2024-06-10 12:17:10.514789] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.216 [2024-06-10 12:17:10.514810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.216 [2024-06-10 12:17:10.514821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.216 [2024-06-10 12:17:10.522830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.216 [2024-06-10 12:17:10.522851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.216 [2024-06-10 12:17:10.522861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.216 [2024-06-10 12:17:10.534076] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.216 [2024-06-10 12:17:10.534098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.216 [2024-06-10 12:17:10.534108] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.216 [2024-06-10 12:17:10.543713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.216 [2024-06-10 12:17:10.543734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.216 [2024-06-10 12:17:10.543748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.216 [2024-06-10 12:17:10.551926] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.216 [2024-06-10 12:17:10.551948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.216 [2024-06-10 12:17:10.551958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.216 [2024-06-10 12:17:10.562118] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.216 [2024-06-10 12:17:10.562139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.216 [2024-06-10 12:17:10.562149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.216 [2024-06-10 12:17:10.570233] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.216 [2024-06-10 12:17:10.570254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4396 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:21.216 [2024-06-10 12:17:10.570264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.217 [2024-06-10 12:17:10.580918] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.217 [2024-06-10 12:17:10.580940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.217 [2024-06-10 12:17:10.580950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.217 [2024-06-10 12:17:10.590024] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.217 [2024-06-10 12:17:10.590045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.217 [2024-06-10 12:17:10.590056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.217 [2024-06-10 12:17:10.598628] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.217 [2024-06-10 12:17:10.598649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.217 [2024-06-10 12:17:10.598659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.217 [2024-06-10 12:17:10.608468] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.217 [2024-06-10 12:17:10.608495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:71 nsid:1 lba:12951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.217 [2024-06-10 12:17:10.608505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.217 [2024-06-10 12:17:10.616815] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.217 [2024-06-10 12:17:10.616838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.217 [2024-06-10 12:17:10.616848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.217 [2024-06-10 12:17:10.624944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.217 [2024-06-10 12:17:10.625003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.217 [2024-06-10 12:17:10.625014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.217 [2024-06-10 12:17:10.633745] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.217 [2024-06-10 12:17:10.633766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.217 [2024-06-10 12:17:10.633777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.217 [2024-06-10 12:17:10.643586] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.217 [2024-06-10 12:17:10.643609] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.217 [2024-06-10 12:17:10.643620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.217 [2024-06-10 12:17:10.651866] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.217 [2024-06-10 12:17:10.651888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.217 [2024-06-10 12:17:10.651899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.217 [2024-06-10 12:17:10.661189] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.217 [2024-06-10 12:17:10.661213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.217 [2024-06-10 12:17:10.661223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.217 [2024-06-10 12:17:10.670381] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.217 [2024-06-10 12:17:10.670404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.217 [2024-06-10 12:17:10.670414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.217 [2024-06-10 12:17:10.677998] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xf68a80) 00:28:21.217 [2024-06-10 12:17:10.678019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.217 [2024-06-10 12:17:10.678030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.217 [2024-06-10 12:17:10.687663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.217 [2024-06-10 12:17:10.687686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.217 [2024-06-10 12:17:10.687697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.217 [2024-06-10 12:17:10.697034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.217 [2024-06-10 12:17:10.697057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.217 [2024-06-10 12:17:10.697068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.217 [2024-06-10 12:17:10.704979] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.217 [2024-06-10 12:17:10.705001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.217 [2024-06-10 12:17:10.705012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.217 [2024-06-10 12:17:10.715126] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.217 [2024-06-10 12:17:10.715147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.217 [2024-06-10 12:17:10.715157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.217 [2024-06-10 12:17:10.723635] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.217 [2024-06-10 12:17:10.723656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.217 [2024-06-10 12:17:10.723667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.217 [2024-06-10 12:17:10.731898] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.217 [2024-06-10 12:17:10.731919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.217 [2024-06-10 12:17:10.731930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.477 [2024-06-10 12:17:10.742457] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.477 [2024-06-10 12:17:10.742485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.477 [2024-06-10 12:17:10.742496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:21.477 [2024-06-10 12:17:10.751380] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.477 [2024-06-10 12:17:10.751403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.477 [2024-06-10 12:17:10.751414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.477 [2024-06-10 12:17:10.759649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.477 [2024-06-10 12:17:10.759671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.477 [2024-06-10 12:17:10.759682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.478 [2024-06-10 12:17:10.768380] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.478 [2024-06-10 12:17:10.768402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.478 [2024-06-10 12:17:10.768413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.478 [2024-06-10 12:17:10.778219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.478 [2024-06-10 12:17:10.778240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.478 [2024-06-10 12:17:10.778254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.478 [2024-06-10 12:17:10.789850] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.478 [2024-06-10 12:17:10.789872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.478 [2024-06-10 12:17:10.789883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.478 [2024-06-10 12:17:10.799219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.478 [2024-06-10 12:17:10.799240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.478 [2024-06-10 12:17:10.799252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.478 [2024-06-10 12:17:10.807455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.478 [2024-06-10 12:17:10.807483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:7568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.478 [2024-06-10 12:17:10.807495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.478 [2024-06-10 12:17:10.818355] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.478 [2024-06-10 12:17:10.818377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.478 [2024-06-10 12:17:10.818387] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.478 [2024-06-10 12:17:10.826374] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.478 [2024-06-10 12:17:10.826395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.478 [2024-06-10 12:17:10.826405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.478 [2024-06-10 12:17:10.837271] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.478 [2024-06-10 12:17:10.837293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.478 [2024-06-10 12:17:10.837303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.478 [2024-06-10 12:17:10.849381] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.478 [2024-06-10 12:17:10.849403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.478 [2024-06-10 12:17:10.849413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.478 [2024-06-10 12:17:10.859072] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.478 [2024-06-10 12:17:10.859094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8819 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:21.478 [2024-06-10 12:17:10.859105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.478 [2024-06-10 12:17:10.866934] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.478 [2024-06-10 12:17:10.866960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.478 [2024-06-10 12:17:10.866971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.478 [2024-06-10 12:17:10.877657] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.478 [2024-06-10 12:17:10.877679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.478 [2024-06-10 12:17:10.877689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.478 [2024-06-10 12:17:10.886984] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.478 [2024-06-10 12:17:10.887005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.478 [2024-06-10 12:17:10.887016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.478 [2024-06-10 12:17:10.895451] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.478 [2024-06-10 12:17:10.895473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:13 nsid:1 lba:19553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.478 [2024-06-10 12:17:10.895490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.478 [2024-06-10 12:17:10.904235] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.478 [2024-06-10 12:17:10.904257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.478 [2024-06-10 12:17:10.904268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.478 [2024-06-10 12:17:10.913673] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.478 [2024-06-10 12:17:10.913696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.478 [2024-06-10 12:17:10.913707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.478 [2024-06-10 12:17:10.921975] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.478 [2024-06-10 12:17:10.921996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.478 [2024-06-10 12:17:10.922006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.478 [2024-06-10 12:17:10.931962] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.478 [2024-06-10 12:17:10.931984] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.478 [2024-06-10 12:17:10.931995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.478 [2024-06-10 12:17:10.939520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.478 [2024-06-10 12:17:10.939542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.478 [2024-06-10 12:17:10.939552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.478 [2024-06-10 12:17:10.950009] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.478 [2024-06-10 12:17:10.950031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.478 [2024-06-10 12:17:10.950041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.478 [2024-06-10 12:17:10.960495] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.478 [2024-06-10 12:17:10.960518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.478 [2024-06-10 12:17:10.960528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.478 [2024-06-10 12:17:10.971762] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 
00:28:21.478 [2024-06-10 12:17:10.971784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.478 [2024-06-10 12:17:10.971795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.478 [2024-06-10 12:17:10.982350] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.478 [2024-06-10 12:17:10.982372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.478 [2024-06-10 12:17:10.982382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.478 [2024-06-10 12:17:10.990664] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.479 [2024-06-10 12:17:10.990686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.479 [2024-06-10 12:17:10.990696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.738 [2024-06-10 12:17:11.001305] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.738 [2024-06-10 12:17:11.001328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.738 [2024-06-10 12:17:11.001338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.738 [2024-06-10 12:17:11.009361] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.738 [2024-06-10 12:17:11.009383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.738 [2024-06-10 12:17:11.009393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.738 [2024-06-10 12:17:11.019869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.739 [2024-06-10 12:17:11.019892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.739 [2024-06-10 12:17:11.019902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.739 [2024-06-10 12:17:11.029691] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.739 [2024-06-10 12:17:11.029713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.739 [2024-06-10 12:17:11.029726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.739 [2024-06-10 12:17:11.037832] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.739 [2024-06-10 12:17:11.037855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.739 [2024-06-10 12:17:11.037866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:21.739 [2024-06-10 12:17:11.047521] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.739 [2024-06-10 12:17:11.047543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.739 [2024-06-10 12:17:11.047553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.739 [2024-06-10 12:17:11.056275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.739 [2024-06-10 12:17:11.056297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.739 [2024-06-10 12:17:11.056308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.739 [2024-06-10 12:17:11.065304] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.739 [2024-06-10 12:17:11.065327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.739 [2024-06-10 12:17:11.065338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.739 [2024-06-10 12:17:11.076512] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.739 [2024-06-10 12:17:11.076535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.739 [2024-06-10 12:17:11.076545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.739 [2024-06-10 12:17:11.086765] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.739 [2024-06-10 12:17:11.086788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.739 [2024-06-10 12:17:11.086798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.739 [2024-06-10 12:17:11.094300] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.739 [2024-06-10 12:17:11.094321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.739 [2024-06-10 12:17:11.094331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.739 [2024-06-10 12:17:11.106897] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.739 [2024-06-10 12:17:11.106919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.739 [2024-06-10 12:17:11.106930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.739 [2024-06-10 12:17:11.116830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.739 [2024-06-10 12:17:11.116852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.739 [2024-06-10 12:17:11.116863] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.739 [2024-06-10 12:17:11.124400] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.739 [2024-06-10 12:17:11.124423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.739 [2024-06-10 12:17:11.124434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.739 [2024-06-10 12:17:11.134935] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.739 [2024-06-10 12:17:11.134958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.739 [2024-06-10 12:17:11.134968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.739 [2024-06-10 12:17:11.146045] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.739 [2024-06-10 12:17:11.146068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.739 [2024-06-10 12:17:11.146078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.739 [2024-06-10 12:17:11.156201] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.739 [2024-06-10 12:17:11.156222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22310 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:21.739 [2024-06-10 12:17:11.156232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.739 [2024-06-10 12:17:11.164786] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.739 [2024-06-10 12:17:11.164808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.739 [2024-06-10 12:17:11.164818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.739 [2024-06-10 12:17:11.174978] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.739 [2024-06-10 12:17:11.175001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.739 [2024-06-10 12:17:11.175011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.739 [2024-06-10 12:17:11.185294] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.739 [2024-06-10 12:17:11.185316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.739 [2024-06-10 12:17:11.185327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.739 [2024-06-10 12:17:11.192951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.739 [2024-06-10 12:17:11.192976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:46 nsid:1 lba:11781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.739 [2024-06-10 12:17:11.192991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.739 [2024-06-10 12:17:11.204198] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.739 [2024-06-10 12:17:11.204221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.739 [2024-06-10 12:17:11.204231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.739 [2024-06-10 12:17:11.212110] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.739 [2024-06-10 12:17:11.212132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.739 [2024-06-10 12:17:11.212143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.739 [2024-06-10 12:17:11.221846] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.739 [2024-06-10 12:17:11.221868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.739 [2024-06-10 12:17:11.221879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.739 [2024-06-10 12:17:11.230673] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.739 [2024-06-10 12:17:11.230696] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.739 [2024-06-10 12:17:11.230706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.739 [2024-06-10 12:17:11.239978] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.739 [2024-06-10 12:17:11.240000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.739 [2024-06-10 12:17:11.240010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.739 [2024-06-10 12:17:11.248486] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.739 [2024-06-10 12:17:11.248507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.739 [2024-06-10 12:17:11.248518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.999 [2024-06-10 12:17:11.258330] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.999 [2024-06-10 12:17:11.258352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.999 [2024-06-10 12:17:11.258362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.999 [2024-06-10 12:17:11.266984] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xf68a80) 00:28:21.999 [2024-06-10 12:17:11.267005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.999 [2024-06-10 12:17:11.267016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.999 [2024-06-10 12:17:11.277838] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.999 [2024-06-10 12:17:11.277863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.999 [2024-06-10 12:17:11.277874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.999 [2024-06-10 12:17:11.286423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.999 [2024-06-10 12:17:11.286444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.999 [2024-06-10 12:17:11.286454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.999 [2024-06-10 12:17:11.294484] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.999 [2024-06-10 12:17:11.294505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.999 [2024-06-10 12:17:11.294516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.999 [2024-06-10 12:17:11.305052] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.999 [2024-06-10 12:17:11.305074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.999 [2024-06-10 12:17:11.305084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.999 [2024-06-10 12:17:11.315175] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.999 [2024-06-10 12:17:11.315196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.999 [2024-06-10 12:17:11.315206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.999 [2024-06-10 12:17:11.324086] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:21.999 [2024-06-10 12:17:11.324107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.000 [2024-06-10 12:17:11.324117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.000 [2024-06-10 12:17:11.333709] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:22.000 [2024-06-10 12:17:11.333731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.000 [2024-06-10 12:17:11.333741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:22.000 [2024-06-10 12:17:11.342602] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:22.000 [2024-06-10 12:17:11.342624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.000 [2024-06-10 12:17:11.342635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.000 [2024-06-10 12:17:11.352688] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:22.000 [2024-06-10 12:17:11.352709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.000 [2024-06-10 12:17:11.352720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.000 [2024-06-10 12:17:11.360971] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:22.000 [2024-06-10 12:17:11.360994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.000 [2024-06-10 12:17:11.361004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.000 [2024-06-10 12:17:11.373214] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:22.000 [2024-06-10 12:17:11.373237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.000 [2024-06-10 12:17:11.373247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.000 [2024-06-10 12:17:11.384534] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:22.000 [2024-06-10 12:17:11.384556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.000 [2024-06-10 12:17:11.384567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.000 [2024-06-10 12:17:11.394405] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:22.000 [2024-06-10 12:17:11.394427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.000 [2024-06-10 12:17:11.394438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.000 [2024-06-10 12:17:11.402414] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:22.000 [2024-06-10 12:17:11.402436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.000 [2024-06-10 12:17:11.402447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.000 [2024-06-10 12:17:11.412274] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:22.000 [2024-06-10 12:17:11.412294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.000 [2024-06-10 12:17:11.412304] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.000 [2024-06-10 12:17:11.420498] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:22.000 [2024-06-10 12:17:11.420519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.000 [2024-06-10 12:17:11.420530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.000 [2024-06-10 12:17:11.429348] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:22.000 [2024-06-10 12:17:11.429370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.000 [2024-06-10 12:17:11.429380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.000 [2024-06-10 12:17:11.438275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:22.000 [2024-06-10 12:17:11.438297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.000 [2024-06-10 12:17:11.438312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.000 [2024-06-10 12:17:11.446556] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:22.000 [2024-06-10 12:17:11.446577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22576 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:22.000 [2024-06-10 12:17:11.446588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.000 [2024-06-10 12:17:11.456415] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:22.000 [2024-06-10 12:17:11.456436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.000 [2024-06-10 12:17:11.456447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.000 [2024-06-10 12:17:11.464545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:22.000 [2024-06-10 12:17:11.464567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.000 [2024-06-10 12:17:11.464577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.000 [2024-06-10 12:17:11.473541] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:22.000 [2024-06-10 12:17:11.473562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.000 [2024-06-10 12:17:11.473573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.000 [2024-06-10 12:17:11.482584] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:22.000 [2024-06-10 12:17:11.482606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:12 nsid:1 lba:24001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.000 [2024-06-10 12:17:11.482616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.000 [2024-06-10 12:17:11.491220] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:22.000 [2024-06-10 12:17:11.491242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.000 [2024-06-10 12:17:11.491252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.000 [2024-06-10 12:17:11.502233] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:22.000 [2024-06-10 12:17:11.502255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.000 [2024-06-10 12:17:11.502265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.000 [2024-06-10 12:17:11.510330] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:22.000 [2024-06-10 12:17:11.510351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.000 [2024-06-10 12:17:11.510361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.260 [2024-06-10 12:17:11.520863] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:22.260 [2024-06-10 12:17:11.520888] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.260 [2024-06-10 12:17:11.520899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.260 [2024-06-10 12:17:11.528632] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf68a80) 00:28:22.260 [2024-06-10 12:17:11.528654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.260 [2024-06-10 12:17:11.528664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.260 00:28:22.260 Latency(us) 00:28:22.260 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:22.260 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:22.260 nvme0n1 : 2.00 26362.44 102.98 0.00 0.00 4849.82 2097.15 16462.64 00:28:22.260 =================================================================================================================== 00:28:22.260 Total : 26362.44 102.98 0.00 0.00 4849.82 2097.15 16462.64 00:28:22.260 0 00:28:22.260 12:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:22.260 12:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:22.260 12:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:22.260 | .driver_specific 00:28:22.260 | .nvme_error 00:28:22.260 | .status_code 00:28:22.260 | .command_transient_transport_error' 00:28:22.260 12:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_get_iostat -b nvme0n1 00:28:22.260 12:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 207 > 0 )) 00:28:22.260 12:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2371216 00:28:22.260 12:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 2371216 ']' 00:28:22.260 12:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 2371216 00:28:22.260 12:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:28:22.260 12:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:22.260 12:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2371216 00:28:22.519 12:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:22.519 12:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:22.520 12:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2371216' 00:28:22.520 killing process with pid 2371216 00:28:22.520 12:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 2371216 00:28:22.520 Received shutdown signal, test time was about 2.000000 seconds 00:28:22.520 00:28:22.520 Latency(us) 00:28:22.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:22.520 =================================================================================================================== 00:28:22.520 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:22.520 12:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 2371216 00:28:22.520 12:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:28:22.520 
12:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:22.520 12:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:22.520 12:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:22.520 12:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:22.520 12:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2371763 00:28:22.520 12:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2371763 /var/tmp/bperf.sock 00:28:22.520 12:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:22.520 12:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 2371763 ']' 00:28:22.520 12:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:22.520 12:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:22.520 12:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:22.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:22.520 12:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:22.520 12:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:22.520 [2024-06-10 12:17:12.015459] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:28:22.520 [2024-06-10 12:17:12.015513] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2371763 ] 00:28:22.520 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:22.520 Zero copy mechanism will not be used. 00:28:22.781 EAL: No free 2048 kB hugepages reported on node 1 00:28:22.781 [2024-06-10 12:17:12.083784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.781 [2024-06-10 12:17:12.147123] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:23.348 12:17:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:23.348 12:17:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:28:23.348 12:17:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:23.348 12:17:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:23.608 12:17:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:23.608 12:17:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:23.608 12:17:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:23.608 12:17:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:23.608 12:17:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:23.608 12:17:12 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:23.868 nvme0n1 00:28:23.868 12:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:23.868 12:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:23.868 12:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:23.868 12:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:23.868 12:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:23.868 12:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:23.868 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:23.868 Zero copy mechanism will not be used. 00:28:23.868 Running I/O for 2 seconds... 
00:28:23.868 [2024-06-10 12:17:13.321696] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:23.868 [2024-06-10 12:17:13.321732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.868 [2024-06-10 12:17:13.321745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.868 [2024-06-10 12:17:13.329732] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:23.868 [2024-06-10 12:17:13.329760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.868 [2024-06-10 12:17:13.329771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.868 [2024-06-10 12:17:13.338432] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:23.868 [2024-06-10 12:17:13.338456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.868 [2024-06-10 12:17:13.338467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.868 [2024-06-10 12:17:13.346237] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:23.868 [2024-06-10 12:17:13.346260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.868 [2024-06-10 12:17:13.346271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.868 [2024-06-10 12:17:13.355216] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:23.868 [2024-06-10 12:17:13.355240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.868 [2024-06-10 12:17:13.355252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.868 [2024-06-10 12:17:13.364200] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:23.868 [2024-06-10 12:17:13.364223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.868 [2024-06-10 12:17:13.364234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.868 [2024-06-10 12:17:13.372646] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:23.868 [2024-06-10 12:17:13.372670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.868 [2024-06-10 12:17:13.372681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.868 [2024-06-10 12:17:13.381654] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:23.868 [2024-06-10 12:17:13.381677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.868 [2024-06-10 12:17:13.381692] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.130 [2024-06-10 12:17:13.390948] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.130 [2024-06-10 12:17:13.390972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.130 [2024-06-10 12:17:13.390984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.130 [2024-06-10 12:17:13.399424] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.130 [2024-06-10 12:17:13.399446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.130 [2024-06-10 12:17:13.399457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.130 [2024-06-10 12:17:13.408137] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.130 [2024-06-10 12:17:13.408159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.130 [2024-06-10 12:17:13.408170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.130 [2024-06-10 12:17:13.417006] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.130 [2024-06-10 12:17:13.417028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:24.130 [2024-06-10 12:17:13.417039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.130 [2024-06-10 12:17:13.425784] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.130 [2024-06-10 12:17:13.425806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.130 [2024-06-10 12:17:13.425817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.130 [2024-06-10 12:17:13.434389] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.130 [2024-06-10 12:17:13.434412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.130 [2024-06-10 12:17:13.434422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.130 [2024-06-10 12:17:13.442604] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.130 [2024-06-10 12:17:13.442625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.130 [2024-06-10 12:17:13.442635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.130 [2024-06-10 12:17:13.449014] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.130 [2024-06-10 12:17:13.449036] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.130 [2024-06-10 12:17:13.449046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.130 [2024-06-10 12:17:13.455453] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.130 [2024-06-10 12:17:13.455484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.130 [2024-06-10 12:17:13.455495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.130 [2024-06-10 12:17:13.461661] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.130 [2024-06-10 12:17:13.461684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.130 [2024-06-10 12:17:13.461694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.130 [2024-06-10 12:17:13.468831] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.130 [2024-06-10 12:17:13.468854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.130 [2024-06-10 12:17:13.468865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.130 [2024-06-10 12:17:13.476208] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.130 [2024-06-10 
12:17:13.476230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.130 [2024-06-10 12:17:13.476241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.130 [2024-06-10 12:17:13.482768] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.130 [2024-06-10 12:17:13.482791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.130 [2024-06-10 12:17:13.482802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.130 [2024-06-10 12:17:13.489173] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.130 [2024-06-10 12:17:13.489194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.130 [2024-06-10 12:17:13.489206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.130 [2024-06-10 12:17:13.494511] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.130 [2024-06-10 12:17:13.494533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.130 [2024-06-10 12:17:13.494544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.130 [2024-06-10 12:17:13.500190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x134b420) 00:28:24.130 [2024-06-10 12:17:13.500213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.130 [2024-06-10 12:17:13.500223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.130 [2024-06-10 12:17:13.506227] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.130 [2024-06-10 12:17:13.506251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.130 [2024-06-10 12:17:13.506262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.130 [2024-06-10 12:17:13.512426] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.130 [2024-06-10 12:17:13.512450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.130 [2024-06-10 12:17:13.512461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.131 [2024-06-10 12:17:13.518763] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.131 [2024-06-10 12:17:13.518784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.131 [2024-06-10 12:17:13.518795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.131 [2024-06-10 12:17:13.524244] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.131 [2024-06-10 12:17:13.524267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.131 [2024-06-10 12:17:13.524278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.131 [2024-06-10 12:17:13.530829] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.131 [2024-06-10 12:17:13.530851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.131 [2024-06-10 12:17:13.530862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.131 [2024-06-10 12:17:13.536815] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.131 [2024-06-10 12:17:13.536838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.131 [2024-06-10 12:17:13.536849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.131 [2024-06-10 12:17:13.542823] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.131 [2024-06-10 12:17:13.542846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.131 [2024-06-10 12:17:13.542857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:28:24.131 [2024-06-10 12:17:13.548652] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.131 [2024-06-10 12:17:13.548675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.131 [2024-06-10 12:17:13.548685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.131 [2024-06-10 12:17:13.554227] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.131 [2024-06-10 12:17:13.554249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.131 [2024-06-10 12:17:13.554259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.131 [2024-06-10 12:17:13.559851] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.131 [2024-06-10 12:17:13.559873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.131 [2024-06-10 12:17:13.559887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.131 [2024-06-10 12:17:13.565387] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.131 [2024-06-10 12:17:13.565410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.131 [2024-06-10 12:17:13.565421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.131 [2024-06-10 12:17:13.570774] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.131 [2024-06-10 12:17:13.570797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.131 [2024-06-10 12:17:13.570807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.131 [2024-06-10 12:17:13.576101] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.131 [2024-06-10 12:17:13.576123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.131 [2024-06-10 12:17:13.576133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.131 [2024-06-10 12:17:13.581522] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.131 [2024-06-10 12:17:13.581544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.131 [2024-06-10 12:17:13.581555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.131 [2024-06-10 12:17:13.586777] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.131 [2024-06-10 12:17:13.586799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.131 [2024-06-10 12:17:13.586810] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.131 [2024-06-10 12:17:13.592067] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.131 [2024-06-10 12:17:13.592089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.131 [2024-06-10 12:17:13.592100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.131 [2024-06-10 12:17:13.597367] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.131 [2024-06-10 12:17:13.597390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.131 [2024-06-10 12:17:13.597400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.131 [2024-06-10 12:17:13.602661] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.131 [2024-06-10 12:17:13.602683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.131 [2024-06-10 12:17:13.602694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.131 [2024-06-10 12:17:13.607926] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.131 [2024-06-10 12:17:13.607948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:24.131 [2024-06-10 12:17:13.607958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.131 [2024-06-10 12:17:13.613183] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.131 [2024-06-10 12:17:13.613205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.131 [2024-06-10 12:17:13.613216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.131 [2024-06-10 12:17:13.618267] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.131 [2024-06-10 12:17:13.618289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.131 [2024-06-10 12:17:13.618299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.131 [2024-06-10 12:17:13.623437] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.131 [2024-06-10 12:17:13.623458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.131 [2024-06-10 12:17:13.623468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.131 [2024-06-10 12:17:13.628605] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.131 [2024-06-10 12:17:13.628628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:3 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.131 [2024-06-10 12:17:13.628638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.131 [2024-06-10 12:17:13.633754] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.131 [2024-06-10 12:17:13.633776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.131 [2024-06-10 12:17:13.633786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.131 [2024-06-10 12:17:13.638892] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.131 [2024-06-10 12:17:13.638915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.131 [2024-06-10 12:17:13.638925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.131 [2024-06-10 12:17:13.643985] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.131 [2024-06-10 12:17:13.644008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.131 [2024-06-10 12:17:13.644018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.393 [2024-06-10 12:17:13.649144] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.393 [2024-06-10 12:17:13.649167] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.393 [2024-06-10 12:17:13.649180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.393 [2024-06-10 12:17:13.654264] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.393 [2024-06-10 12:17:13.654287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.393 [2024-06-10 12:17:13.654298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.393 [2024-06-10 12:17:13.659366] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.393 [2024-06-10 12:17:13.659389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.393 [2024-06-10 12:17:13.659400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.393 [2024-06-10 12:17:13.664493] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.393 [2024-06-10 12:17:13.664515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.393 [2024-06-10 12:17:13.664525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.393 [2024-06-10 12:17:13.669684] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x134b420) 00:28:24.393 [2024-06-10 12:17:13.669706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.393 [2024-06-10 12:17:13.669717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.393 [2024-06-10 12:17:13.674873] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.393 [2024-06-10 12:17:13.674896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.393 [2024-06-10 12:17:13.674906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.393 [2024-06-10 12:17:13.680041] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.393 [2024-06-10 12:17:13.680063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.393 [2024-06-10 12:17:13.680074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.393 [2024-06-10 12:17:13.685184] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.393 [2024-06-10 12:17:13.685206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.393 [2024-06-10 12:17:13.685217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.393 [2024-06-10 12:17:13.690441] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.393 [2024-06-10 12:17:13.690464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.393 [2024-06-10 12:17:13.690475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.393 [2024-06-10 12:17:13.695595] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.393 [2024-06-10 12:17:13.695620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.393 [2024-06-10 12:17:13.695630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.393 [2024-06-10 12:17:13.700700] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.393 [2024-06-10 12:17:13.700722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.393 [2024-06-10 12:17:13.700733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.393 [2024-06-10 12:17:13.705790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.393 [2024-06-10 12:17:13.705812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.393 [2024-06-10 12:17:13.705822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:28:24.393 [2024-06-10 12:17:13.710967] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.393 [2024-06-10 12:17:13.710989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.393 [2024-06-10 12:17:13.710999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.393 [2024-06-10 12:17:13.716097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.393 [2024-06-10 12:17:13.716118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.393 [2024-06-10 12:17:13.716129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.393 [2024-06-10 12:17:13.721259] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.394 [2024-06-10 12:17:13.721282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.394 [2024-06-10 12:17:13.721293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.394 [2024-06-10 12:17:13.726424] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.394 [2024-06-10 12:17:13.726447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.394 [2024-06-10 12:17:13.726458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.394 [2024-06-10 12:17:13.731587] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.394 [2024-06-10 12:17:13.731609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.394 [2024-06-10 12:17:13.731619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.394 [2024-06-10 12:17:13.736744] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.394 [2024-06-10 12:17:13.736767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.394 [2024-06-10 12:17:13.736778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.394 [2024-06-10 12:17:13.741889] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.394 [2024-06-10 12:17:13.741912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.394 [2024-06-10 12:17:13.741923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.394 [2024-06-10 12:17:13.747008] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.394 [2024-06-10 12:17:13.747031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.394 [2024-06-10 12:17:13.747042] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.394 [2024-06-10 12:17:13.752079] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.394 [2024-06-10 12:17:13.752102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.394 [2024-06-10 12:17:13.752112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.394 [2024-06-10 12:17:13.757180] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.394 [2024-06-10 12:17:13.757202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.394 [2024-06-10 12:17:13.757213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.394 [2024-06-10 12:17:13.762277] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.394 [2024-06-10 12:17:13.762299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.394 [2024-06-10 12:17:13.762310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.394 [2024-06-10 12:17:13.767373] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.394 [2024-06-10 12:17:13.767395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:24.394 [2024-06-10 12:17:13.767405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.394 [2024-06-10 12:17:13.772546] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.394 [2024-06-10 12:17:13.772567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.394 [2024-06-10 12:17:13.772577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.394 [2024-06-10 12:17:13.777695] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.394 [2024-06-10 12:17:13.777716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.394 [2024-06-10 12:17:13.777727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.394 [2024-06-10 12:17:13.782825] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.394 [2024-06-10 12:17:13.782846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.394 [2024-06-10 12:17:13.782859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.394 [2024-06-10 12:17:13.788059] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.394 [2024-06-10 12:17:13.788082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:3 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.394 [2024-06-10 12:17:13.788093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.394 [2024-06-10 12:17:13.793346] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.394 [2024-06-10 12:17:13.793368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.394 [2024-06-10 12:17:13.793379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.394 [2024-06-10 12:17:13.798413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.394 [2024-06-10 12:17:13.798436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.394 [2024-06-10 12:17:13.798446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.394 [2024-06-10 12:17:13.803522] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.394 [2024-06-10 12:17:13.803543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.394 [2024-06-10 12:17:13.803553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.394 [2024-06-10 12:17:13.808633] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.394 [2024-06-10 12:17:13.808655] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.394 [2024-06-10 12:17:13.808666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.394 [2024-06-10 12:17:13.814218] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.394 [2024-06-10 12:17:13.814241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.394 [2024-06-10 12:17:13.814251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.394 [2024-06-10 12:17:13.821352] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.394 [2024-06-10 12:17:13.821381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.394 [2024-06-10 12:17:13.821392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.394 [2024-06-10 12:17:13.829470] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.394 [2024-06-10 12:17:13.829502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.394 [2024-06-10 12:17:13.829512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.394 [2024-06-10 12:17:13.836986] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x134b420) 00:28:24.394 [2024-06-10 12:17:13.837014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.394 [2024-06-10 12:17:13.837025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.394 [2024-06-10 12:17:13.845378] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.394 [2024-06-10 12:17:13.845402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.394 [2024-06-10 12:17:13.845414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.394 [2024-06-10 12:17:13.854563] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.394 [2024-06-10 12:17:13.854587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.395 [2024-06-10 12:17:13.854597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.395 [2024-06-10 12:17:13.863336] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.395 [2024-06-10 12:17:13.863360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.395 [2024-06-10 12:17:13.863371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.395 [2024-06-10 12:17:13.872119] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.395 [2024-06-10 12:17:13.872143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.395 [2024-06-10 12:17:13.872154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.395 [2024-06-10 12:17:13.880969] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.395 [2024-06-10 12:17:13.880995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.395 [2024-06-10 12:17:13.881006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.395 [2024-06-10 12:17:13.889486] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.395 [2024-06-10 12:17:13.889510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.395 [2024-06-10 12:17:13.889521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.395 [2024-06-10 12:17:13.898086] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.395 [2024-06-10 12:17:13.898111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.395 [2024-06-10 12:17:13.898121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:28:24.395 [2024-06-10 12:17:13.906507] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.395 [2024-06-10 12:17:13.906532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.395 [2024-06-10 12:17:13.906543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.657 [2024-06-10 12:17:13.915505] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.657 [2024-06-10 12:17:13.915530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.657 [2024-06-10 12:17:13.915541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.657 [2024-06-10 12:17:13.924928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.657 [2024-06-10 12:17:13.924952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.657 [2024-06-10 12:17:13.924962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.657 [2024-06-10 12:17:13.932897] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.657 [2024-06-10 12:17:13.932920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.657 [2024-06-10 12:17:13.932931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.657 [2024-06-10 12:17:13.941882] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.657 [2024-06-10 12:17:13.941907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.657 [2024-06-10 12:17:13.941917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.657 [2024-06-10 12:17:13.950289] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.657 [2024-06-10 12:17:13.950312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.657 [2024-06-10 12:17:13.950323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.657 [2024-06-10 12:17:13.958611] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.657 [2024-06-10 12:17:13.958638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.657 [2024-06-10 12:17:13.958648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.657 [2024-06-10 12:17:13.966329] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.657 [2024-06-10 12:17:13.966353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.657 [2024-06-10 12:17:13.966363] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.657 [2024-06-10 12:17:13.974089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.657 [2024-06-10 12:17:13.974112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.657 [2024-06-10 12:17:13.974122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.657 [2024-06-10 12:17:13.982229] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.657 [2024-06-10 12:17:13.982253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.657 [2024-06-10 12:17:13.982267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.657 [2024-06-10 12:17:13.989813] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.657 [2024-06-10 12:17:13.989836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.657 [2024-06-10 12:17:13.989848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.657 [2024-06-10 12:17:13.997942] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:24.657 [2024-06-10 12:17:13.997965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0
00:28:24.657 [2024-06-10 12:17:13.997976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:24.657 [2024-06-10 12:17:14.006372] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420)
00:28:24.657 [2024-06-10 12:17:14.006396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.657 [2024-06-10 12:17:14.006407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:24.657 [2024-06-10 12:17:14.015897] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420)
00:28:24.657 [2024-06-10 12:17:14.015921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.657 [2024-06-10 12:17:14.015932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-line pattern repeats from 12:17:14.025053 through 12:17:14.482013 (log timestamps 00:28:24.657 to 00:28:25.185): a data digest error on tqpair=(0x134b420) from nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done, followed by a READ command print (sqid:1, nsid:1, len:32, with varying cid and lba) and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:28:25.185 [2024-06-10 12:17:14.487539] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420)
00:28:25.185 [2024-06-10 12:17:14.487561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.185 [2024-06-10 12:17:14.487572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041
p:0 m:0 dnr:0 00:28:25.185 [2024-06-10 12:17:14.494643] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.185 [2024-06-10 12:17:14.494664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.185 [2024-06-10 12:17:14.494675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.185 [2024-06-10 12:17:14.501596] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.185 [2024-06-10 12:17:14.501618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.185 [2024-06-10 12:17:14.501629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.185 [2024-06-10 12:17:14.508960] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.185 [2024-06-10 12:17:14.508983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.185 [2024-06-10 12:17:14.508997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.185 [2024-06-10 12:17:14.516715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.185 [2024-06-10 12:17:14.516738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.185 [2024-06-10 12:17:14.516748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.185 [2024-06-10 12:17:14.523821] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.185 [2024-06-10 12:17:14.523844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.185 [2024-06-10 12:17:14.523855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.185 [2024-06-10 12:17:14.530266] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.185 [2024-06-10 12:17:14.530289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.185 [2024-06-10 12:17:14.530300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.185 [2024-06-10 12:17:14.535967] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.185 [2024-06-10 12:17:14.535990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.185 [2024-06-10 12:17:14.536000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.185 [2024-06-10 12:17:14.541705] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.185 [2024-06-10 12:17:14.541727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.185 [2024-06-10 12:17:14.541737] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.185 [2024-06-10 12:17:14.547633] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.185 [2024-06-10 12:17:14.547655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.185 [2024-06-10 12:17:14.547665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.185 [2024-06-10 12:17:14.553739] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.185 [2024-06-10 12:17:14.553764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.185 [2024-06-10 12:17:14.553774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.185 [2024-06-10 12:17:14.559760] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.185 [2024-06-10 12:17:14.559783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.185 [2024-06-10 12:17:14.559793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.185 [2024-06-10 12:17:14.566177] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.185 [2024-06-10 12:17:14.566204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:25.185 [2024-06-10 12:17:14.566214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.185 [2024-06-10 12:17:14.572830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.185 [2024-06-10 12:17:14.572859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.185 [2024-06-10 12:17:14.572869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.185 [2024-06-10 12:17:14.580373] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.185 [2024-06-10 12:17:14.580396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.185 [2024-06-10 12:17:14.580407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.185 [2024-06-10 12:17:14.587049] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.185 [2024-06-10 12:17:14.587071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.185 [2024-06-10 12:17:14.587081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.185 [2024-06-10 12:17:14.593475] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.185 [2024-06-10 12:17:14.593504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.185 [2024-06-10 12:17:14.593514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.185 [2024-06-10 12:17:14.600293] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.186 [2024-06-10 12:17:14.600316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.186 [2024-06-10 12:17:14.600327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.186 [2024-06-10 12:17:14.607266] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.186 [2024-06-10 12:17:14.607289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.186 [2024-06-10 12:17:14.607300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.186 [2024-06-10 12:17:14.614216] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.186 [2024-06-10 12:17:14.614238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.186 [2024-06-10 12:17:14.614249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.186 [2024-06-10 12:17:14.619398] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.186 [2024-06-10 12:17:14.619421] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.186 [2024-06-10 12:17:14.619432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.186 [2024-06-10 12:17:14.624794] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.186 [2024-06-10 12:17:14.624817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.186 [2024-06-10 12:17:14.624828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.186 [2024-06-10 12:17:14.631677] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.186 [2024-06-10 12:17:14.631700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.186 [2024-06-10 12:17:14.631710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.186 [2024-06-10 12:17:14.639344] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.186 [2024-06-10 12:17:14.639368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.186 [2024-06-10 12:17:14.639378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.186 [2024-06-10 12:17:14.645601] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x134b420) 00:28:25.186 [2024-06-10 12:17:14.645624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.186 [2024-06-10 12:17:14.645635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.186 [2024-06-10 12:17:14.651967] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.186 [2024-06-10 12:17:14.651990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.186 [2024-06-10 12:17:14.652000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.186 [2024-06-10 12:17:14.658446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.186 [2024-06-10 12:17:14.658468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.186 [2024-06-10 12:17:14.658485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.186 [2024-06-10 12:17:14.665118] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.186 [2024-06-10 12:17:14.665142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.186 [2024-06-10 12:17:14.665153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.186 [2024-06-10 12:17:14.670908] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.186 [2024-06-10 12:17:14.670931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.186 [2024-06-10 12:17:14.670942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.186 [2024-06-10 12:17:14.677040] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.186 [2024-06-10 12:17:14.677064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.186 [2024-06-10 12:17:14.677077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.186 [2024-06-10 12:17:14.684491] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.186 [2024-06-10 12:17:14.684513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.186 [2024-06-10 12:17:14.684523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.186 [2024-06-10 12:17:14.691597] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.186 [2024-06-10 12:17:14.691620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.186 [2024-06-10 12:17:14.691631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:28:25.186 [2024-06-10 12:17:14.698724] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.186 [2024-06-10 12:17:14.698748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.186 [2024-06-10 12:17:14.698759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.447 [2024-06-10 12:17:14.707260] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.448 [2024-06-10 12:17:14.707285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.448 [2024-06-10 12:17:14.707296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.448 [2024-06-10 12:17:14.716391] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.448 [2024-06-10 12:17:14.716415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.448 [2024-06-10 12:17:14.716426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.448 [2024-06-10 12:17:14.724452] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.448 [2024-06-10 12:17:14.724480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.448 [2024-06-10 12:17:14.724491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.448 [2024-06-10 12:17:14.731519] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.448 [2024-06-10 12:17:14.731541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.448 [2024-06-10 12:17:14.731551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.448 [2024-06-10 12:17:14.738675] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.448 [2024-06-10 12:17:14.738698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.448 [2024-06-10 12:17:14.738708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.448 [2024-06-10 12:17:14.745584] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.448 [2024-06-10 12:17:14.745609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.448 [2024-06-10 12:17:14.745620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.448 [2024-06-10 12:17:14.749446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.448 [2024-06-10 12:17:14.749467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.448 [2024-06-10 12:17:14.749484] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.448 [2024-06-10 12:17:14.754742] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.448 [2024-06-10 12:17:14.754765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.448 [2024-06-10 12:17:14.754776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.448 [2024-06-10 12:17:14.760473] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.448 [2024-06-10 12:17:14.760501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.448 [2024-06-10 12:17:14.760511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.448 [2024-06-10 12:17:14.767095] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.448 [2024-06-10 12:17:14.767118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.448 [2024-06-10 12:17:14.767128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.448 [2024-06-10 12:17:14.774045] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.448 [2024-06-10 12:17:14.774074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:25.448 [2024-06-10 12:17:14.774084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.448 [2024-06-10 12:17:14.780809] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.448 [2024-06-10 12:17:14.780832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.448 [2024-06-10 12:17:14.780842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.448 [2024-06-10 12:17:14.787285] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.448 [2024-06-10 12:17:14.787307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.448 [2024-06-10 12:17:14.787318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.448 [2024-06-10 12:17:14.793710] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.448 [2024-06-10 12:17:14.793732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.448 [2024-06-10 12:17:14.793746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.448 [2024-06-10 12:17:14.799101] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.448 [2024-06-10 12:17:14.799124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.448 [2024-06-10 12:17:14.799135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.448 [2024-06-10 12:17:14.805416] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.448 [2024-06-10 12:17:14.805438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.448 [2024-06-10 12:17:14.805449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.448 [2024-06-10 12:17:14.811768] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.448 [2024-06-10 12:17:14.811791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.448 [2024-06-10 12:17:14.811803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.448 [2024-06-10 12:17:14.818606] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.448 [2024-06-10 12:17:14.818628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.448 [2024-06-10 12:17:14.818639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.448 [2024-06-10 12:17:14.824964] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420) 00:28:25.448 [2024-06-10 12:17:14.824987] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.448 [2024-06-10 12:17:14.824998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:25.448 [2024-06-10 12:17:14.831862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420)
00:28:25.448 [2024-06-10 12:17:14.831884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.448 [2024-06-10 12:17:14.831895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:25.448 [2024-06-10 12:17:14.838082] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420)
00:28:25.448 [2024-06-10 12:17:14.838105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.448 [2024-06-10 12:17:14.838116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:25.975 [2024-06-10 12:17:15.307040] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420)
00:28:25.975 [2024-06-10 12:17:15.307063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.975 [2024-06-10 12:17:15.307073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:25.975 [2024-06-10 12:17:15.313097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b420)
00:28:25.975 [2024-06-10 12:17:15.313120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.975 [2024-06-10 12:17:15.313130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:25.975 
00:28:25.975 Latency(us)
00:28:25.975 Device Information                                                        : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:28:25.975 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:25.975 	 nvme0n1                                                          :       2.04    4735.83     591.98       0.00       0.00    3311.87     435.81   44459.62
00:28:25.975 ===================================================================================================================
00:28:25.975 Total                                                                     :               4735.83     591.98       0.00       0.00    3311.87     435.81   44459.62
00:28:25.975 0
00:28:25.975 12:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:25.975 12:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:25.975 12:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:25.975 | .driver_specific
00:28:25.975 | .nvme_error
00:28:25.975 | .status_code
00:28:25.975 | .command_transient_transport_error'
00:28:25.975 12:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:26.235 12:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 312 > 0 ))
00:28:26.235 12:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2371763
00:28:26.235 12:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 2371763 ']'
00:28:26.235 12:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 2371763
00:28:26.235 12:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
00:28:26.235 12:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:28:26.235 12:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2371763
00:28:26.235 12:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:28:26.235 12:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:28:26.235 12:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2371763'
00:28:26.235 killing process with pid 2371763
00:28:26.235 12:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 2371763
00:28:26.235 Received shutdown signal, test time was about 2.000000 seconds
00:28:26.235 
00:28:26.235 Latency(us)
00:28:26.235 Device Information                                                        : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:28:26.235 ===================================================================================================================
00:28:26.235 Total                                                                     :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:28:26.236 12:17:15
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 2371763 00:28:26.495 12:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:28:26.495 12:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:26.496 12:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:26.496 12:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:26.496 12:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:26.496 12:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2372431 00:28:26.496 12:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2372431 /var/tmp/bperf.sock 00:28:26.496 12:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:28:26.496 12:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 2372431 ']' 00:28:26.496 12:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:26.496 12:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:26.496 12:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:26.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:28:26.496 12:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:26.496 12:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:26.496 [2024-06-10 12:17:15.833389] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:28:26.496 [2024-06-10 12:17:15.833446] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2372431 ] 00:28:26.496 EAL: No free 2048 kB hugepages reported on node 1 00:28:26.496 [2024-06-10 12:17:15.903273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.496 [2024-06-10 12:17:15.978814] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.432 12:17:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:27.432 12:17:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:28:27.432 12:17:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:27.432 12:17:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:27.432 12:17:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:27.432 12:17:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:27.432 12:17:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:27.432 12:17:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:27.432 12:17:16 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:27.432 12:17:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:27.999 nvme0n1 00:28:27.999 12:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:27.999 12:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:27.999 12:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:27.999 12:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:27.999 12:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:27.999 12:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:27.999 Running I/O for 2 seconds... 
00:28:27.999 [2024-06-10 12:17:17.388472] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190fa7d8 00:28:27.999 [2024-06-10 12:17:17.389275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.999 [2024-06-10 12:17:17.389304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:27.999 [2024-06-10 12:17:17.397718] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190ea680 00:28:27.999 [2024-06-10 12:17:17.398621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.999 [2024-06-10 12:17:17.398644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:27.999 [2024-06-10 12:17:17.406937] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e9168 00:28:27.999 [2024-06-10 12:17:17.407953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.999 [2024-06-10 12:17:17.407974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:28.000 [2024-06-10 12:17:17.416118] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f0ff8 00:28:28.000 [2024-06-10 12:17:17.417177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.000 [2024-06-10 12:17:17.417198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:92 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:28.000 [2024-06-10 12:17:17.423715] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190feb58 00:28:28.000 [2024-06-10 12:17:17.424315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.000 [2024-06-10 12:17:17.424336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:28.000 [2024-06-10 12:17:17.432715] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f1868 00:28:28.000 [2024-06-10 12:17:17.433507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.000 [2024-06-10 12:17:17.433528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:28.000 [2024-06-10 12:17:17.442741] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f2948 00:28:28.000 [2024-06-10 12:17:17.443987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.000 [2024-06-10 12:17:17.444007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:28.000 [2024-06-10 12:17:17.451891] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190efae0 00:28:28.000 [2024-06-10 12:17:17.453255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.000 [2024-06-10 12:17:17.453275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:28.000 [2024-06-10 12:17:17.461047] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e4140 00:28:28.000 [2024-06-10 12:17:17.462543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.000 [2024-06-10 12:17:17.462563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:28.000 [2024-06-10 12:17:17.467211] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e01f8 00:28:28.000 [2024-06-10 12:17:17.467886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.000 [2024-06-10 12:17:17.467906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:28.000 [2024-06-10 12:17:17.475496] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e4578 00:28:28.000 [2024-06-10 12:17:17.476164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.000 [2024-06-10 12:17:17.476184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:28.000 [2024-06-10 12:17:17.484598] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f0bc0 00:28:28.000 [2024-06-10 12:17:17.485373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.000 [2024-06-10 12:17:17.485393] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:28.000 [2024-06-10 12:17:17.494301] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e6300 00:28:28.000 [2024-06-10 12:17:17.495130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.000 [2024-06-10 12:17:17.495150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:28.000 [2024-06-10 12:17:17.503120] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190fe2e8 00:28:28.000 [2024-06-10 12:17:17.503928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.000 [2024-06-10 12:17:17.503949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:28.000 [2024-06-10 12:17:17.511927] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190ebfd0 00:28:28.000 [2024-06-10 12:17:17.512613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.000 [2024-06-10 12:17:17.512633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:28.259 [2024-06-10 12:17:17.521117] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190eb760 00:28:28.259 [2024-06-10 12:17:17.522058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:28.259 [2024-06-10 12:17:17.522079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:28.259 [2024-06-10 12:17:17.529841] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190de8a8 00:28:28.259 [2024-06-10 12:17:17.530756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.259 [2024-06-10 12:17:17.530777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:28.259 [2024-06-10 12:17:17.538732] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190eff18 00:28:28.259 [2024-06-10 12:17:17.539639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.259 [2024-06-10 12:17:17.539660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:28.259 [2024-06-10 12:17:17.546711] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f1868 00:28:28.259 [2024-06-10 12:17:17.547705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.259 [2024-06-10 12:17:17.547726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:28.259 [2024-06-10 12:17:17.555832] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f5378 00:28:28.259 [2024-06-10 12:17:17.556918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19877 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.259 [2024-06-10 12:17:17.556938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:28.259 [2024-06-10 12:17:17.564923] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e5658 00:28:28.259 [2024-06-10 12:17:17.566128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.259 [2024-06-10 12:17:17.566148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:28.259 [2024-06-10 12:17:17.574029] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190df550 00:28:28.259 [2024-06-10 12:17:17.575345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.259 [2024-06-10 12:17:17.575366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:28.259 [2024-06-10 12:17:17.583159] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190df988 00:28:28.259 [2024-06-10 12:17:17.584573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.259 [2024-06-10 12:17:17.584593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:28.259 [2024-06-10 12:17:17.589296] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190fac10 00:28:28.259 [2024-06-10 12:17:17.589918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:125 nsid:1 lba:5754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.259 [2024-06-10 12:17:17.589938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:28.259 [2024-06-10 12:17:17.597594] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190ef6a8 00:28:28.259 [2024-06-10 12:17:17.598217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.259 [2024-06-10 12:17:17.598240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:28.259 [2024-06-10 12:17:17.606709] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190ee190 00:28:28.259 [2024-06-10 12:17:17.607454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.259 [2024-06-10 12:17:17.607474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:28.259 [2024-06-10 12:17:17.615820] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e0a68 00:28:28.259 [2024-06-10 12:17:17.616594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.259 [2024-06-10 12:17:17.616614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:28.259 [2024-06-10 12:17:17.624913] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e73e0 00:28:28.259 [2024-06-10 12:17:17.625800] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.259 [2024-06-10 12:17:17.625821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:28.259 [2024-06-10 12:17:17.633677] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e8088 00:28:28.259 [2024-06-10 12:17:17.634641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.259 [2024-06-10 12:17:17.634662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:28.259 [2024-06-10 12:17:17.642850] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e3060 00:28:28.259 [2024-06-10 12:17:17.643976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.259 [2024-06-10 12:17:17.643997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:28.259 [2024-06-10 12:17:17.652223] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190eff18 00:28:28.259 [2024-06-10 12:17:17.653442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.259 [2024-06-10 12:17:17.653462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:28.259 [2024-06-10 12:17:17.661435] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f6458 00:28:28.259 
[2024-06-10 12:17:17.662747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.259 [2024-06-10 12:17:17.662767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:28.259 [2024-06-10 12:17:17.670726] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190ee5c8 00:28:28.259 [2024-06-10 12:17:17.672076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.259 [2024-06-10 12:17:17.672096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:28.259 [2024-06-10 12:17:17.676923] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190ecc78 00:28:28.259 [2024-06-10 12:17:17.677460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.259 [2024-06-10 12:17:17.677487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:28.259 [2024-06-10 12:17:17.686040] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f6458 00:28:28.259 [2024-06-10 12:17:17.686699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.259 [2024-06-10 12:17:17.686719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.259 [2024-06-10 12:17:17.694132] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x944920) with pdu=0x2000190f6890 00:28:28.259 [2024-06-10 12:17:17.694754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.259 [2024-06-10 12:17:17.694774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:28.259 [2024-06-10 12:17:17.702994] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190ec408 00:28:28.259 [2024-06-10 12:17:17.703617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.259 [2024-06-10 12:17:17.703637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:28.259 [2024-06-10 12:17:17.713472] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e0ea0 00:28:28.260 [2024-06-10 12:17:17.714559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:18580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.260 [2024-06-10 12:17:17.714579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:28.260 [2024-06-10 12:17:17.720558] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e88f8 00:28:28.260 [2024-06-10 12:17:17.721089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.260 [2024-06-10 12:17:17.721110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:28.260 [2024-06-10 12:17:17.729519] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e9e10 00:28:28.260 [2024-06-10 12:17:17.730231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.260 [2024-06-10 12:17:17.730251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:28.260 [2024-06-10 12:17:17.738346] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190ec408 00:28:28.260 [2024-06-10 12:17:17.739005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.260 [2024-06-10 12:17:17.739026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:28.260 [2024-06-10 12:17:17.748167] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e1f80 00:28:28.260 [2024-06-10 12:17:17.749351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.260 [2024-06-10 12:17:17.749371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:28.260 [2024-06-10 12:17:17.755822] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f7970 00:28:28.260 [2024-06-10 12:17:17.756343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.260 [2024-06-10 12:17:17.756363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:28:28.260 [2024-06-10 12:17:17.764883] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190ea248 00:28:28.260 [2024-06-10 12:17:17.765641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.260 [2024-06-10 12:17:17.765661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:28.260 [2024-06-10 12:17:17.772909] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f9f68 00:28:28.260 [2024-06-10 12:17:17.773757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.260 [2024-06-10 12:17:17.773777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:28.519 [2024-06-10 12:17:17.782287] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f0788 00:28:28.519 [2024-06-10 12:17:17.783260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.519 [2024-06-10 12:17:17.783281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:28.519 [2024-06-10 12:17:17.791428] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190de038 00:28:28.519 [2024-06-10 12:17:17.792405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.519 [2024-06-10 12:17:17.792425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:79 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:28.519 [2024-06-10 12:17:17.800992] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190ed0b0 00:28:28.519 [2024-06-10 12:17:17.802154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.519 [2024-06-10 12:17:17.802176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:28.519 [2024-06-10 12:17:17.808095] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f6458 00:28:28.519 [2024-06-10 12:17:17.808722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.519 [2024-06-10 12:17:17.808743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:28.519 [2024-06-10 12:17:17.816779] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190ea680 00:28:28.519 [2024-06-10 12:17:17.817404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.519 [2024-06-10 12:17:17.817423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:28.519 [2024-06-10 12:17:17.825508] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190fdeb0 00:28:28.519 [2024-06-10 12:17:17.826135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.519 [2024-06-10 12:17:17.826154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:28.519 [2024-06-10 12:17:17.833623] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190fc998 00:28:28.519 [2024-06-10 12:17:17.834329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.519 [2024-06-10 12:17:17.834349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:28.519 [2024-06-10 12:17:17.842449] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e23b8 00:28:28.519 [2024-06-10 12:17:17.843161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.519 [2024-06-10 12:17:17.843181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:28.519 [2024-06-10 12:17:17.851488] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f7538 00:28:28.519 [2024-06-10 12:17:17.852193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.519 [2024-06-10 12:17:17.852214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:28.519 [2024-06-10 12:17:17.861436] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190df550 00:28:28.519 [2024-06-10 12:17:17.862175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.519 [2024-06-10 12:17:17.862196] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:28.519 [2024-06-10 12:17:17.869990] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190fb8b8 00:28:28.519 [2024-06-10 12:17:17.870732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.519 [2024-06-10 12:17:17.870753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:28.519 [2024-06-10 12:17:17.878052] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190eaab8 00:28:28.519 [2024-06-10 12:17:17.878903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.519 [2024-06-10 12:17:17.878923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:28.519 [2024-06-10 12:17:17.886948] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f0788 00:28:28.520 [2024-06-10 12:17:17.887591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.520 [2024-06-10 12:17:17.887611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:28.520 [2024-06-10 12:17:17.894921] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190feb58 00:28:28.520 [2024-06-10 12:17:17.895771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.520 
[2024-06-10 12:17:17.895792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:28.520 [2024-06-10 12:17:17.904251] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e88f8 00:28:28.520 [2024-06-10 12:17:17.905218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.520 [2024-06-10 12:17:17.905241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:28.520 [2024-06-10 12:17:17.913512] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f57b0 00:28:28.520 [2024-06-10 12:17:17.914579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.520 [2024-06-10 12:17:17.914599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:28.520 [2024-06-10 12:17:17.923211] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e7c50 00:28:28.520 [2024-06-10 12:17:17.924385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.520 [2024-06-10 12:17:17.924406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:28.520 [2024-06-10 12:17:17.930335] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f0ff8 00:28:28.520 [2024-06-10 12:17:17.930960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23000 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:28.520 [2024-06-10 12:17:17.930980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:28.520 [2024-06-10 12:17:17.938434] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e6738 00:28:28.520 [2024-06-10 12:17:17.939142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.520 [2024-06-10 12:17:17.939161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:28.520 [2024-06-10 12:17:17.947543] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f81e0 00:28:28.520 [2024-06-10 12:17:17.948336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.520 [2024-06-10 12:17:17.948356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:28.520 [2024-06-10 12:17:17.956623] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e0630 00:28:28.520 [2024-06-10 12:17:17.957547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:18652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.520 [2024-06-10 12:17:17.957567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:28.520 [2024-06-10 12:17:17.965714] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e8088 00:28:28.520 [2024-06-10 12:17:17.966765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:74 nsid:1 lba:5038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.520 [2024-06-10 12:17:17.966785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:28.520 [2024-06-10 12:17:17.973770] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190ea248 00:28:28.520 [2024-06-10 12:17:17.974383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.520 [2024-06-10 12:17:17.974403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:28.520 [2024-06-10 12:17:17.982614] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190efae0 00:28:28.520 [2024-06-10 12:17:17.983093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.520 [2024-06-10 12:17:17.983113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:28.520 [2024-06-10 12:17:17.991631] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e73e0 00:28:28.520 [2024-06-10 12:17:17.992241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.520 [2024-06-10 12:17:17.992261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:28.520 [2024-06-10 12:17:18.000729] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e38d0 00:28:28.520 [2024-06-10 12:17:18.001455] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.520 [2024-06-10 12:17:18.001481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:28.520 [2024-06-10 12:17:18.008962] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f5be8 00:28:28.520 [2024-06-10 12:17:18.010250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.520 [2024-06-10 12:17:18.010270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:28.520 [2024-06-10 12:17:18.016445] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190feb58 00:28:28.520 [2024-06-10 12:17:18.017147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.520 [2024-06-10 12:17:18.017167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:28.520 [2024-06-10 12:17:18.025987] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f92c0 00:28:28.520 [2024-06-10 12:17:18.026741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.520 [2024-06-10 12:17:18.026761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:28.520 [2024-06-10 12:17:18.035007] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e4578 00:28:28.520 
[2024-06-10 12:17:18.036015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.520 [2024-06-10 12:17:18.036036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:28.779 [2024-06-10 12:17:18.043460] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f4b08 00:28:28.779 [2024-06-10 12:17:18.044409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.779 [2024-06-10 12:17:18.044429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:28.779 [2024-06-10 12:17:18.052611] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190ef270 00:28:28.779 [2024-06-10 12:17:18.053653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.779 [2024-06-10 12:17:18.053673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:28.779 [2024-06-10 12:17:18.061691] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f0788 00:28:28.779 [2024-06-10 12:17:18.062849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.779 [2024-06-10 12:17:18.062870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:28.779 [2024-06-10 12:17:18.070792] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) 
with pdu=0x2000190e1f80 00:28:28.779 [2024-06-10 12:17:18.072071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.779 [2024-06-10 12:17:18.072091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:28.779 [2024-06-10 12:17:18.079890] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f8e88 00:28:28.779 [2024-06-10 12:17:18.081276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.779 [2024-06-10 12:17:18.081296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:28.779 [2024-06-10 12:17:18.088987] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f5be8 00:28:28.779 [2024-06-10 12:17:18.090496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.779 [2024-06-10 12:17:18.090516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:28.779 [2024-06-10 12:17:18.095085] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f0ff8 00:28:28.779 [2024-06-10 12:17:18.095764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.779 [2024-06-10 12:17:18.095784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:28.779 [2024-06-10 12:17:18.105401] tcp.c:2062:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x944920) with pdu=0x2000190fb480 00:28:28.779 [2024-06-10 12:17:18.106557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.779 [2024-06-10 12:17:18.106576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:28.779 [2024-06-10 12:17:18.112372] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190fd208 00:28:28.779 [2024-06-10 12:17:18.112992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.779 [2024-06-10 12:17:18.113012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:28.779 [2024-06-10 12:17:18.121326] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f7970 00:28:28.779 [2024-06-10 12:17:18.122140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.779 [2024-06-10 12:17:18.122159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:28.779 [2024-06-10 12:17:18.129559] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190ff3c8 00:28:28.779 [2024-06-10 12:17:18.130354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.779 [2024-06-10 12:17:18.130376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:28.779 [2024-06-10 12:17:18.138374] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e9e10 00:28:28.779 [2024-06-10 12:17:18.139179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.779 [2024-06-10 12:17:18.139199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:28.779 [2024-06-10 12:17:18.148763] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190ea248 00:28:28.779 [2024-06-10 12:17:18.149870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.779 [2024-06-10 12:17:18.149890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:28.779 [2024-06-10 12:17:18.156275] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190fb048 00:28:28.779 [2024-06-10 12:17:18.156783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.779 [2024-06-10 12:17:18.156804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:28.779 [2024-06-10 12:17:18.165185] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e38d0 00:28:28.779 [2024-06-10 12:17:18.165930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.779 [2024-06-10 12:17:18.165950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004b p:0 m:0 dnr:0 
00:28:28.779 [2024-06-10 12:17:18.174051] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e38d0 00:28:28.779 [2024-06-10 12:17:18.174770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.779 [2024-06-10 12:17:18.174791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:28.779 [2024-06-10 12:17:18.182998] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e6738 00:28:28.779 [2024-06-10 12:17:18.183914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.779 [2024-06-10 12:17:18.183934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:28.779 [2024-06-10 12:17:18.191227] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f92c0 00:28:28.779 [2024-06-10 12:17:18.192145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.779 [2024-06-10 12:17:18.192165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:28.779 [2024-06-10 12:17:18.200350] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f0788 00:28:28.780 [2024-06-10 12:17:18.201380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.780 [2024-06-10 12:17:18.201400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:76 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:28.780 [2024-06-10 12:17:18.209448] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e73e0 00:28:28.780 [2024-06-10 12:17:18.210596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.780 [2024-06-10 12:17:18.210616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:28.780 [2024-06-10 12:17:18.217094] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e01f8 00:28:28.780 [2024-06-10 12:17:18.217699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.780 [2024-06-10 12:17:18.217719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:28.780 [2024-06-10 12:17:18.226039] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e88f8 00:28:28.780 [2024-06-10 12:17:18.226825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.780 [2024-06-10 12:17:18.226845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:28.780 [2024-06-10 12:17:18.234296] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e95a0 00:28:28.780 [2024-06-10 12:17:18.235024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:17745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.780 [2024-06-10 12:17:18.235044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:28.780 [2024-06-10 12:17:18.243999] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190ef270 00:28:28.780 [2024-06-10 12:17:18.244830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.780 [2024-06-10 12:17:18.244851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:28.780 [2024-06-10 12:17:18.252090] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e3d08 00:28:28.780 [2024-06-10 12:17:18.252991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.780 [2024-06-10 12:17:18.253012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:28.780 [2024-06-10 12:17:18.261202] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f6890 00:28:28.780 [2024-06-10 12:17:18.262218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.780 [2024-06-10 12:17:18.262238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:28.780 [2024-06-10 12:17:18.270317] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190fac10 00:28:28.780 [2024-06-10 12:17:18.271447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.780 [2024-06-10 12:17:18.271466] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:28.780 [2024-06-10 12:17:18.279454] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190fc998 00:28:28.780 [2024-06-10 12:17:18.280705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.780 [2024-06-10 12:17:18.280725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:28.780 [2024-06-10 12:17:18.287493] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190fa3a0 00:28:28.780 [2024-06-10 12:17:18.288304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.780 [2024-06-10 12:17:18.288324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:28.780 [2024-06-10 12:17:18.296436] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190fb048 00:28:28.780 [2024-06-10 12:17:18.297157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.780 [2024-06-10 12:17:18.297178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:29.038 [2024-06-10 12:17:18.305505] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e4578 00:28:29.038 [2024-06-10 12:17:18.306430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.038 
[2024-06-10 12:17:18.306449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:29.038 [2024-06-10 12:17:18.314469] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e88f8 00:28:29.038 [2024-06-10 12:17:18.315605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.038 [2024-06-10 12:17:18.315625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:29.038 [2024-06-10 12:17:18.322698] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e1f80 00:28:29.038 [2024-06-10 12:17:18.323846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.038 [2024-06-10 12:17:18.323866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:29.038 [2024-06-10 12:17:18.331810] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190ee5c8 00:28:29.038 [2024-06-10 12:17:18.333054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.038 [2024-06-10 12:17:18.333075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:29.038 [2024-06-10 12:17:18.339834] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190df988 00:28:29.038 [2024-06-10 12:17:18.340626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8034 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:29.038 [2024-06-10 12:17:18.340646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:29.038 [2024-06-10 12:17:18.348686] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e23b8 00:28:29.038 [2024-06-10 12:17:18.349373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.038 [2024-06-10 12:17:18.349394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:29.038 [2024-06-10 12:17:18.356896] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190fac10 00:28:29.038 [2024-06-10 12:17:18.358108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.038 [2024-06-10 12:17:18.358131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:29.038 [2024-06-10 12:17:18.364416] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190df118 00:28:29.038 [2024-06-10 12:17:18.365001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.038 [2024-06-10 12:17:18.365021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:29.038 [2024-06-10 12:17:18.373501] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f0788 00:28:29.038 [2024-06-10 12:17:18.374258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:86 nsid:1 lba:6714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.038 [2024-06-10 12:17:18.374278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:29.038 [2024-06-10 12:17:18.382616] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190ef6a8 00:28:29.038 [2024-06-10 12:17:18.383505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.038 [2024-06-10 12:17:18.383526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:29.038 [2024-06-10 12:17:18.391708] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190dfdc0 00:28:29.038 [2024-06-10 12:17:18.392725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.038 [2024-06-10 12:17:18.392745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:29.038 [2024-06-10 12:17:18.401079] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190ea680 00:28:29.038 [2024-06-10 12:17:18.402240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.038 [2024-06-10 12:17:18.402262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:29.038 [2024-06-10 12:17:18.410437] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f0788 00:28:29.038 [2024-06-10 12:17:18.411709] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.038 [2024-06-10 12:17:18.411729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:29.038 [2024-06-10 12:17:18.419669] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e3498 00:28:29.038 [2024-06-10 12:17:18.421022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.038 [2024-06-10 12:17:18.421042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:29.038 [2024-06-10 12:17:18.428858] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e6fa8 00:28:29.038 [2024-06-10 12:17:18.430327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.038 [2024-06-10 12:17:18.430347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:29.038 [2024-06-10 12:17:18.435055] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e5ec8 00:28:29.038 [2024-06-10 12:17:18.435728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.038 [2024-06-10 12:17:18.435751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:29.038 [2024-06-10 12:17:18.443342] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e27f0 00:28:29.038 
[2024-06-10 12:17:18.444020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.038 [2024-06-10 12:17:18.444040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:29.038 [2024-06-10 12:17:18.452432] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190eb328 00:28:29.038 [2024-06-10 12:17:18.453228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.038 [2024-06-10 12:17:18.453248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:29.038 [2024-06-10 12:17:18.461562] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e95a0 00:28:29.038 [2024-06-10 12:17:18.462459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.038 [2024-06-10 12:17:18.462482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:29.038 [2024-06-10 12:17:18.470652] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e9168 00:28:29.038 [2024-06-10 12:17:18.471641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.038 [2024-06-10 12:17:18.471661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:29.038 [2024-06-10 12:17:18.479760] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x944920) with pdu=0x2000190e1f80 00:28:29.038 [2024-06-10 12:17:18.480890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.038 [2024-06-10 12:17:18.480909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:29.038 [2024-06-10 12:17:18.488854] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190eb328 00:28:29.038 [2024-06-10 12:17:18.490099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.038 [2024-06-10 12:17:18.490119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:29.039 [2024-06-10 12:17:18.497965] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190df988 00:28:29.039 [2024-06-10 12:17:18.499314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.039 [2024-06-10 12:17:18.499334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:29.039 [2024-06-10 12:17:18.507074] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e12d8 00:28:29.039 [2024-06-10 12:17:18.508544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.039 [2024-06-10 12:17:18.508564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:29.039 [2024-06-10 12:17:18.513211] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190de470 00:28:29.039 [2024-06-10 12:17:18.513798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.039 [2024-06-10 12:17:18.513818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:29.039 [2024-06-10 12:17:18.522279] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190df550 00:28:29.039 [2024-06-10 12:17:18.523075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.039 [2024-06-10 12:17:18.523096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:29.039 [2024-06-10 12:17:18.532406] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190ed4e8 00:28:29.039 [2024-06-10 12:17:18.533656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.039 [2024-06-10 12:17:18.533676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:29.039 [2024-06-10 12:17:18.541534] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190feb58 00:28:29.039 [2024-06-10 12:17:18.542877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.039 [2024-06-10 12:17:18.542897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 
00:28:29.039 [2024-06-10 12:17:18.549650] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f7da8 00:28:29.039 [2024-06-10 12:17:18.550575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.039 [2024-06-10 12:17:18.550595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:29.297 [2024-06-10 12:17:18.557689] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e4de8 00:28:29.297 [2024-06-10 12:17:18.558938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.297 [2024-06-10 12:17:18.558959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:29.297 [2024-06-10 12:17:18.565939] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190fb8b8 00:28:29.297 [2024-06-10 12:17:18.566611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.297 [2024-06-10 12:17:18.566631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:29.297 [2024-06-10 12:17:18.574630] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e4140 00:28:29.297 [2024-06-10 12:17:18.575298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.297 [2024-06-10 12:17:18.575318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:29.297 [2024-06-10 12:17:18.583621] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f7970 00:28:29.297 [2024-06-10 12:17:18.584397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.297 [2024-06-10 12:17:18.584417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:29.297 [2024-06-10 12:17:18.592210] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f35f0 00:28:29.297 [2024-06-10 12:17:18.592697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.297 [2024-06-10 12:17:18.592718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:29.297 [2024-06-10 12:17:18.601396] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e3060 00:28:29.297 [2024-06-10 12:17:18.601977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.297 [2024-06-10 12:17:18.601996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:29.297 [2024-06-10 12:17:18.610272] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f0788 00:28:29.297 [2024-06-10 12:17:18.611172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.297 [2024-06-10 12:17:18.611192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:29.297 [2024-06-10 12:17:18.620095] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190fd208 00:28:29.297 [2024-06-10 12:17:18.621413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.297 [2024-06-10 12:17:18.621433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:29.297 [2024-06-10 12:17:18.628129] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f5378 00:28:29.297 [2024-06-10 12:17:18.629052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.297 [2024-06-10 12:17:18.629072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:29.297 [2024-06-10 12:17:18.636716] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f46d0 00:28:29.297 [2024-06-10 12:17:18.637727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.297 [2024-06-10 12:17:18.637747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:29.297 [2024-06-10 12:17:18.645393] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f7100 00:28:29.297 [2024-06-10 12:17:18.646428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.297 [2024-06-10 12:17:18.646448] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:29.297 [2024-06-10 12:17:18.654083] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f2948 00:28:29.297 [2024-06-10 12:17:18.655140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.297 [2024-06-10 12:17:18.655160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:29.298 [2024-06-10 12:17:18.663028] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f0350 00:28:29.298 [2024-06-10 12:17:18.664029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.298 [2024-06-10 12:17:18.664051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:29.298 [2024-06-10 12:17:18.671829] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190fcdd0 00:28:29.298 [2024-06-10 12:17:18.672847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.298 [2024-06-10 12:17:18.672868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:29.298 [2024-06-10 12:17:18.680662] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190ed4e8 00:28:29.298 [2024-06-10 12:17:18.681662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.298 
[2024-06-10 12:17:18.681682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:29.298 [2024-06-10 12:17:18.689376] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190ef6a8 00:28:29.298 [2024-06-10 12:17:18.690358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.298 [2024-06-10 12:17:18.690378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:29.298 [2024-06-10 12:17:18.698069] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f4298 00:28:29.298 [2024-06-10 12:17:18.699063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:14115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.298 [2024-06-10 12:17:18.699083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:29.298 [2024-06-10 12:17:18.706778] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e4578 00:28:29.298 [2024-06-10 12:17:18.707702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.298 [2024-06-10 12:17:18.707722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:29.298 [2024-06-10 12:17:18.715510] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190ff3c8 00:28:29.298 [2024-06-10 12:17:18.716421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22241 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.298 [2024-06-10 12:17:18.716441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:29.298 [2024-06-10 12:17:18.724233] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e73e0 00:28:29.298 [2024-06-10 12:17:18.725260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.298 [2024-06-10 12:17:18.725279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:29.298 [2024-06-10 12:17:18.732966] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190ed0b0 00:28:29.298 [2024-06-10 12:17:18.733885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.298 [2024-06-10 12:17:18.733905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:29.298 [2024-06-10 12:17:18.741650] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e7c50 00:28:29.298 [2024-06-10 12:17:18.742681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.298 [2024-06-10 12:17:18.742701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:29.298 [2024-06-10 12:17:18.750353] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190fa7d8 00:28:29.298 [2024-06-10 12:17:18.751277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:63 nsid:1 lba:10983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.298 [2024-06-10 12:17:18.751297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:29.298 [2024-06-10 12:17:18.759044] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e6300 00:28:29.298 [2024-06-10 12:17:18.759954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.298 [2024-06-10 12:17:18.759974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:29.298 [2024-06-10 12:17:18.767056] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f7da8 00:28:29.298 [2024-06-10 12:17:18.767928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.298 [2024-06-10 12:17:18.767948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:29.298 [2024-06-10 12:17:18.776102] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f57b0 00:28:29.298 [2024-06-10 12:17:18.777116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.298 [2024-06-10 12:17:18.777135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.298 [2024-06-10 12:17:18.785214] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190df118 00:28:29.298 [2024-06-10 12:17:18.786380] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.298 [2024-06-10 12:17:18.786400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:29.298 [2024-06-10 12:17:18.794617] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190ed0b0 00:28:29.298 [2024-06-10 12:17:18.795892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.298 [2024-06-10 12:17:18.795912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.298 [2024-06-10 12:17:18.803800] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f8a50 00:28:29.298 [2024-06-10 12:17:18.805245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:14800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.298 [2024-06-10 12:17:18.805266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:29.298 [2024-06-10 12:17:18.809939] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190ec840 00:28:29.298 [2024-06-10 12:17:18.810506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:8612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.298 [2024-06-10 12:17:18.810527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:29.557 [2024-06-10 12:17:18.819305] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190feb58 00:28:29.557 
[2024-06-10 12:17:18.820067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.557 [2024-06-10 12:17:18.820088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:29.557 [2024-06-10 12:17:18.828388] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190eaab8 00:28:29.557 [2024-06-10 12:17:18.829005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.557 [2024-06-10 12:17:18.829027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.557 [2024-06-10 12:17:18.838414] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f8e88 00:28:29.557 [2024-06-10 12:17:18.839693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.557 [2024-06-10 12:17:18.839713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.557 [2024-06-10 12:17:18.847287] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e8088 00:28:29.557 [2024-06-10 12:17:18.848605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.557 [2024-06-10 12:17:18.848626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:29.557 [2024-06-10 12:17:18.854544] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with 
pdu=0x2000190f0ff8 00:28:29.557 [2024-06-10 12:17:18.855111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.557 [2024-06-10 12:17:18.855132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:29.557 [2024-06-10 12:17:18.863280] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f6020 00:28:29.557 [2024-06-10 12:17:18.863833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.557 [2024-06-10 12:17:18.863853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:29.557 [2024-06-10 12:17:18.872131] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190fb480 00:28:29.557 [2024-06-10 12:17:18.872896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.557 [2024-06-10 12:17:18.872916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:29.557 [2024-06-10 12:17:18.881244] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e1b48 00:28:29.557 [2024-06-10 12:17:18.882124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.557 [2024-06-10 12:17:18.882145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:29.557 [2024-06-10 12:17:18.890335] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x944920) with pdu=0x2000190f1868 00:28:29.557 [2024-06-10 12:17:18.891324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.557 [2024-06-10 12:17:18.891347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:29.557 [2024-06-10 12:17:18.899456] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f7da8 00:28:29.557 [2024-06-10 12:17:18.900564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.557 [2024-06-10 12:17:18.900584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.557 [2024-06-10 12:17:18.907586] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f0788 00:28:29.558 [2024-06-10 12:17:18.908682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.558 [2024-06-10 12:17:18.908703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.558 [2024-06-10 12:17:18.916183] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e3498 00:28:29.558 [2024-06-10 12:17:18.917256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.558 [2024-06-10 12:17:18.917277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:29.558 [2024-06-10 12:17:18.924469] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e1b48 00:28:29.558 [2024-06-10 12:17:18.925126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.558 [2024-06-10 12:17:18.925146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:29.558 [2024-06-10 12:17:18.933165] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f8e88 00:28:29.558 [2024-06-10 12:17:18.933947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.558 [2024-06-10 12:17:18.933967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:29.558 [2024-06-10 12:17:18.942141] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e2c28 00:28:29.558 [2024-06-10 12:17:18.942670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.558 [2024-06-10 12:17:18.942691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:29.558 [2024-06-10 12:17:18.951896] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190ed4e8 00:28:29.558 [2024-06-10 12:17:18.952985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.558 [2024-06-10 12:17:18.953006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 
00:28:29.558 [2024-06-10 12:17:18.960369] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190fa3a0 00:28:29.558 [2024-06-10 12:17:18.961119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.558 [2024-06-10 12:17:18.961140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.558 [2024-06-10 12:17:18.968591] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e6738 00:28:29.558 [2024-06-10 12:17:18.969844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.558 [2024-06-10 12:17:18.969866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:29.558 [2024-06-10 12:17:18.976557] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190df550 00:28:29.558 [2024-06-10 12:17:18.977279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.558 [2024-06-10 12:17:18.977299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:29.558 [2024-06-10 12:17:18.986177] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190eff18 00:28:29.558 [2024-06-10 12:17:18.987181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.558 [2024-06-10 12:17:18.987201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:29.558 [2024-06-10 12:17:18.996462] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e01f8 00:28:29.558 [2024-06-10 12:17:18.997963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.558 [2024-06-10 12:17:18.997983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:29.558 [2024-06-10 12:17:19.002630] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190ed920 00:28:29.558 [2024-06-10 12:17:19.003339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.558 [2024-06-10 12:17:19.003359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:29.558 [2024-06-10 12:17:19.011455] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f20d8 00:28:29.558 [2024-06-10 12:17:19.012115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.558 [2024-06-10 12:17:19.012136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:29.558 [2024-06-10 12:17:19.020159] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f7538 00:28:29.558 [2024-06-10 12:17:19.020798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.558 [2024-06-10 12:17:19.020818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:29.558 [2024-06-10 12:17:19.029170] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e73e0 00:28:29.558 [2024-06-10 12:17:19.030003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.558 [2024-06-10 12:17:19.030024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:29.558 [2024-06-10 12:17:19.037273] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190ecc78 00:28:29.558 [2024-06-10 12:17:19.037993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.558 [2024-06-10 12:17:19.038013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:29.558 [2024-06-10 12:17:19.045681] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e8088 00:28:29.558 [2024-06-10 12:17:19.046379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:24423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.558 [2024-06-10 12:17:19.046399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:29.558 [2024-06-10 12:17:19.055398] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190fda78 00:28:29.558 [2024-06-10 12:17:19.056249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.558 [2024-06-10 12:17:19.056269] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:29.558 [2024-06-10 12:17:19.064096] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f81e0 00:28:29.558 [2024-06-10 12:17:19.064843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:18482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.558 [2024-06-10 12:17:19.064863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:29.558 [2024-06-10 12:17:19.073116] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190fdeb0 00:28:29.558 [2024-06-10 12:17:19.074104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.558 [2024-06-10 12:17:19.074125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:29.817 [2024-06-10 12:17:19.081503] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f6890 00:28:29.817 [2024-06-10 12:17:19.082353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.817 [2024-06-10 12:17:19.082373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:29.817 [2024-06-10 12:17:19.089858] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e9e10 00:28:29.817 [2024-06-10 12:17:19.090586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:29.817 [2024-06-10 12:17:19.090606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:29.817 [2024-06-10 12:17:19.098989] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f2510 00:28:29.817 [2024-06-10 12:17:19.099830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.817 [2024-06-10 12:17:19.099850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:29.817 [2024-06-10 12:17:19.108704] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190efae0 00:28:29.817 [2024-06-10 12:17:19.109680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.817 [2024-06-10 12:17:19.109700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:29.817 [2024-06-10 12:17:19.116797] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190fb480 00:28:29.817 [2024-06-10 12:17:19.117815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.817 [2024-06-10 12:17:19.117837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:29.817 [2024-06-10 12:17:19.125931] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190df988 00:28:29.817 [2024-06-10 12:17:19.127015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23693 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.817 [2024-06-10 12:17:19.127035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:29.817 [2024-06-10 12:17:19.134041] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190fb8b8 00:28:29.817 [2024-06-10 12:17:19.134898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.817 [2024-06-10 12:17:19.134918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:29.817 [2024-06-10 12:17:19.142842] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e1710 00:28:29.817 [2024-06-10 12:17:19.143450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.817 [2024-06-10 12:17:19.143470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:29.817 [2024-06-10 12:17:19.153296] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f9b30 00:28:29.817 [2024-06-10 12:17:19.154801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.817 [2024-06-10 12:17:19.154821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:29.817 [2024-06-10 12:17:19.159436] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190dece0 00:28:29.817 [2024-06-10 12:17:19.160171] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.817 [2024-06-10 12:17:19.160191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:29.817 [2024-06-10 12:17:19.169674] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190fd208 00:28:29.817 [2024-06-10 12:17:19.170423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.817 [2024-06-10 12:17:19.170443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:29.817 [2024-06-10 12:17:19.179782] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f3e60 00:28:29.817 [2024-06-10 12:17:19.181271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.817 [2024-06-10 12:17:19.181290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:29.817 [2024-06-10 12:17:19.185975] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190ef6a8 00:28:29.817 [2024-06-10 12:17:19.186689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.817 [2024-06-10 12:17:19.186709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:29.817 [2024-06-10 12:17:19.196024] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e84c0 00:28:29.817 [2024-06-10 12:17:19.196773] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.817 [2024-06-10 12:17:19.196793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:29.817 [2024-06-10 12:17:19.204045] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190fe720 00:28:29.817 [2024-06-10 12:17:19.205365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.817 [2024-06-10 12:17:19.205385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:29.817 [2024-06-10 12:17:19.211720] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e7c50 00:28:29.817 [2024-06-10 12:17:19.212403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.818 [2024-06-10 12:17:19.212424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:29.818 [2024-06-10 12:17:19.221540] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f6458 00:28:29.818 [2024-06-10 12:17:19.222277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.818 [2024-06-10 12:17:19.222297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:29.818 [2024-06-10 12:17:19.230513] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e5658 
00:28:29.818 [2024-06-10 12:17:19.231400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.818 [2024-06-10 12:17:19.231420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:29.818 [2024-06-10 12:17:19.239098] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e8d30 00:28:29.818 [2024-06-10 12:17:19.239774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.818 [2024-06-10 12:17:19.239795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:29.818 [2024-06-10 12:17:19.249191] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f5be8 00:28:29.818 [2024-06-10 12:17:19.250552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.818 [2024-06-10 12:17:19.250572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:29.818 [2024-06-10 12:17:19.257292] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f2948 00:28:29.818 [2024-06-10 12:17:19.258248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.818 [2024-06-10 12:17:19.258267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:29.818 [2024-06-10 12:17:19.265923] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x944920) with pdu=0x2000190e49b0 00:28:29.818 [2024-06-10 12:17:19.266973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.818 [2024-06-10 12:17:19.266993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:29.818 [2024-06-10 12:17:19.274595] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190fa7d8 00:28:29.818 [2024-06-10 12:17:19.275617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.818 [2024-06-10 12:17:19.275637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:29.818 [2024-06-10 12:17:19.282675] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f1ca0 00:28:29.818 [2024-06-10 12:17:19.283623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.818 [2024-06-10 12:17:19.283643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:29.818 [2024-06-10 12:17:19.291811] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190fe2e8 00:28:29.818 [2024-06-10 12:17:19.292925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.818 [2024-06-10 12:17:19.292946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:29.818 [2024-06-10 12:17:19.300929] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e2c28 00:28:29.818 [2024-06-10 12:17:19.302194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.818 [2024-06-10 12:17:19.302213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:29.818 [2024-06-10 12:17:19.309047] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f0bc0 00:28:29.818 [2024-06-10 12:17:19.309974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.818 [2024-06-10 12:17:19.309994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:29.818 [2024-06-10 12:17:19.317877] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e88f8 00:28:29.818 [2024-06-10 12:17:19.318596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.818 [2024-06-10 12:17:19.318616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:29.818 [2024-06-10 12:17:19.325733] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190ea680 00:28:29.818 [2024-06-10 12:17:19.326631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.818 [2024-06-10 12:17:19.326651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:002a p:0 m:0 dnr:0 
00:28:29.818 [2024-06-10 12:17:19.335535] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e4de8 00:28:30.076 [2024-06-10 12:17:19.336538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.076 [2024-06-10 12:17:19.336559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:30.076 [2024-06-10 12:17:19.344373] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f8a50 00:28:30.076 [2024-06-10 12:17:19.345441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.076 [2024-06-10 12:17:19.345462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:30.076 [2024-06-10 12:17:19.353101] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190f4298 00:28:30.076 [2024-06-10 12:17:19.354130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.076 [2024-06-10 12:17:19.354150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:30.076 [2024-06-10 12:17:19.361808] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190e0630 00:28:30.076 [2024-06-10 12:17:19.362851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.076 [2024-06-10 12:17:19.362871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:30.076 [2024-06-10 12:17:19.370514] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x944920) with pdu=0x2000190fc128 00:28:30.076 [2024-06-10 12:17:19.371562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.076 [2024-06-10 12:17:19.371583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:30.076 00:28:30.076 Latency(us) 00:28:30.076 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.076 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:30.076 nvme0n1 : 2.00 29148.39 113.86 0.00 0.00 4385.51 1690.83 15518.92 00:28:30.076 =================================================================================================================== 00:28:30.076 Total : 29148.39 113.86 0.00 0.00 4385.51 1690.83 15518.92 00:28:30.076 0 00:28:30.076 12:17:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:30.076 12:17:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:30.076 12:17:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:30.076 12:17:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:30.076 | .driver_specific 00:28:30.076 | .nvme_error 00:28:30.076 | .status_code 00:28:30.076 | .command_transient_transport_error' 00:28:30.076 12:17:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 228 > 0 )) 00:28:30.076 12:17:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2372431 00:28:30.076 12:17:19 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 2372431 ']' 00:28:30.076 12:17:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 2372431 00:28:30.076 12:17:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:28:30.076 12:17:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:30.076 12:17:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2372431 00:28:30.334 12:17:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:30.334 12:17:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:30.334 12:17:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2372431' 00:28:30.334 killing process with pid 2372431 00:28:30.334 12:17:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 2372431 00:28:30.334 Received shutdown signal, test time was about 2.000000 seconds 00:28:30.334 00:28:30.334 Latency(us) 00:28:30.334 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.334 =================================================================================================================== 00:28:30.334 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:30.334 12:17:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 2372431 00:28:30.334 12:17:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:30.334 12:17:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:30.334 12:17:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:30.334 12:17:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@56 -- # bs=131072 00:28:30.334 12:17:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:30.334 12:17:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2373117 00:28:30.334 12:17:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:30.334 12:17:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2373117 /var/tmp/bperf.sock 00:28:30.334 12:17:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 2373117 ']' 00:28:30.334 12:17:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:30.334 12:17:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:30.334 12:17:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:30.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:30.334 12:17:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:30.334 12:17:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:30.334 [2024-06-10 12:17:19.854055] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:28:30.334 [2024-06-10 12:17:19.854107] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2373117 ] 00:28:30.334 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:30.334 Zero copy mechanism will not be used. 
00:28:30.591 EAL: No free 2048 kB hugepages reported on node 1 00:28:30.592 [2024-06-10 12:17:19.923911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.592 [2024-06-10 12:17:19.998609] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:31.156 12:17:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:31.156 12:17:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:28:31.156 12:17:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:31.156 12:17:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:31.414 12:17:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:31.414 12:17:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:31.414 12:17:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:31.414 12:17:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:31.414 12:17:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:31.414 12:17:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:31.671 nvme0n1 00:28:31.671 12:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o 
crc32c -t corrupt -i 32 00:28:31.671 12:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:31.671 12:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:31.671 12:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:31.671 12:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:31.671 12:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:31.671 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:31.671 Zero copy mechanism will not be used. 00:28:31.671 Running I/O for 2 seconds... 00:28:31.930 [2024-06-10 12:17:21.201001] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.930 [2024-06-10 12:17:21.201359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.930 [2024-06-10 12:17:21.201388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.930 [2024-06-10 12:17:21.206975] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.930 [2024-06-10 12:17:21.207308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.930 [2024-06-10 12:17:21.207333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.930 [2024-06-10 12:17:21.213014] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with 
pdu=0x2000190fef90 00:28:31.930 [2024-06-10 12:17:21.213352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.930 [2024-06-10 12:17:21.213376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.930 [2024-06-10 12:17:21.218739] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.930 [2024-06-10 12:17:21.219083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.930 [2024-06-10 12:17:21.219106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.930 [2024-06-10 12:17:21.224120] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.930 [2024-06-10 12:17:21.224449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.930 [2024-06-10 12:17:21.224471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.930 [2024-06-10 12:17:21.229164] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.930 [2024-06-10 12:17:21.229512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.930 [2024-06-10 12:17:21.229534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.930 [2024-06-10 12:17:21.235141] tcp.c:2062:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.930 [2024-06-10 12:17:21.235484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.930 [2024-06-10 12:17:21.235507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.930 [2024-06-10 12:17:21.242778] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.930 [2024-06-10 12:17:21.243115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.930 [2024-06-10 12:17:21.243136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.930 [2024-06-10 12:17:21.249215] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.930 [2024-06-10 12:17:21.249558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.930 [2024-06-10 12:17:21.249580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.930 [2024-06-10 12:17:21.255274] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.930 [2024-06-10 12:17:21.255613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.930 [2024-06-10 12:17:21.255634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.930 
[2024-06-10 12:17:21.261210] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.930 [2024-06-10 12:17:21.261533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.930 [2024-06-10 12:17:21.261554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.930 [2024-06-10 12:17:21.267529] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.930 [2024-06-10 12:17:21.267878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.931 [2024-06-10 12:17:21.267900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.931 [2024-06-10 12:17:21.273732] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.931 [2024-06-10 12:17:21.274074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.931 [2024-06-10 12:17:21.274095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.931 [2024-06-10 12:17:21.280619] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.931 [2024-06-10 12:17:21.280941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.931 [2024-06-10 12:17:21.280962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.931 [2024-06-10 12:17:21.287392] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.931 [2024-06-10 12:17:21.287729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.931 [2024-06-10 12:17:21.287750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.931 [2024-06-10 12:17:21.293411] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.931 [2024-06-10 12:17:21.293483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.931 [2024-06-10 12:17:21.293504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.931 [2024-06-10 12:17:21.300384] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.931 [2024-06-10 12:17:21.300716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.931 [2024-06-10 12:17:21.300737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.931 [2024-06-10 12:17:21.307164] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.931 [2024-06-10 12:17:21.307490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.931 [2024-06-10 12:17:21.307510] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.931 [2024-06-10 12:17:21.313679] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.931 [2024-06-10 12:17:21.314033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.931 [2024-06-10 12:17:21.314054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.931 [2024-06-10 12:17:21.320613] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.931 [2024-06-10 12:17:21.320934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.931 [2024-06-10 12:17:21.320956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.931 [2024-06-10 12:17:21.326917] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.931 [2024-06-10 12:17:21.327254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.931 [2024-06-10 12:17:21.327274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.931 [2024-06-10 12:17:21.333149] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.931 [2024-06-10 12:17:21.333486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.931 [2024-06-10 
12:17:21.333506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.931 [2024-06-10 12:17:21.339857] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.931 [2024-06-10 12:17:21.339924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.931 [2024-06-10 12:17:21.339943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.931 [2024-06-10 12:17:21.346367] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.931 [2024-06-10 12:17:21.346701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.931 [2024-06-10 12:17:21.346725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.931 [2024-06-10 12:17:21.352768] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.931 [2024-06-10 12:17:21.353091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.931 [2024-06-10 12:17:21.353111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.931 [2024-06-10 12:17:21.359601] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.931 [2024-06-10 12:17:21.359962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.931 [2024-06-10 12:17:21.359983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.931 [2024-06-10 12:17:21.366887] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.931 [2024-06-10 12:17:21.367240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.931 [2024-06-10 12:17:21.367261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.931 [2024-06-10 12:17:21.373417] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.931 [2024-06-10 12:17:21.373758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.931 [2024-06-10 12:17:21.373779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.931 [2024-06-10 12:17:21.379547] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.931 [2024-06-10 12:17:21.379886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.931 [2024-06-10 12:17:21.379907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.931 [2024-06-10 12:17:21.386358] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.931 [2024-06-10 12:17:21.386709] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.931 [2024-06-10 12:17:21.386730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.931 [2024-06-10 12:17:21.392104] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.931 [2024-06-10 12:17:21.392426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.931 [2024-06-10 12:17:21.392446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.931 [2024-06-10 12:17:21.397519] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.931 [2024-06-10 12:17:21.397854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.931 [2024-06-10 12:17:21.397874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.931 [2024-06-10 12:17:21.402774] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.931 [2024-06-10 12:17:21.403109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.931 [2024-06-10 12:17:21.403130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.931 [2024-06-10 12:17:21.408236] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.931 [2024-06-10 
12:17:21.408565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.931 [2024-06-10 12:17:21.408586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.931 [2024-06-10 12:17:21.413359] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.931 [2024-06-10 12:17:21.413702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.931 [2024-06-10 12:17:21.413723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.931 [2024-06-10 12:17:21.418185] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.931 [2024-06-10 12:17:21.418525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.931 [2024-06-10 12:17:21.418546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.931 [2024-06-10 12:17:21.423032] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.931 [2024-06-10 12:17:21.423355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.931 [2024-06-10 12:17:21.423376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.931 [2024-06-10 12:17:21.427864] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) 
with pdu=0x2000190fef90 00:28:31.931 [2024-06-10 12:17:21.428194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.931 [2024-06-10 12:17:21.428215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.931 [2024-06-10 12:17:21.432795] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.931 [2024-06-10 12:17:21.433119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.931 [2024-06-10 12:17:21.433140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.931 [2024-06-10 12:17:21.438188] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.932 [2024-06-10 12:17:21.438539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.932 [2024-06-10 12:17:21.438576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.932 [2024-06-10 12:17:21.443424] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.932 [2024-06-10 12:17:21.443761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.932 [2024-06-10 12:17:21.443781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.932 [2024-06-10 12:17:21.448549] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:31.932 [2024-06-10 12:17:21.448884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.932 [2024-06-10 12:17:21.448906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.192 [2024-06-10 12:17:21.453523] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.192 [2024-06-10 12:17:21.453868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.192 [2024-06-10 12:17:21.453889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.192 [2024-06-10 12:17:21.458529] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.192 [2024-06-10 12:17:21.458866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.192 [2024-06-10 12:17:21.458887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.192 [2024-06-10 12:17:21.463907] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.192 [2024-06-10 12:17:21.464233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.192 [2024-06-10 12:17:21.464254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:28:32.192 [2024-06-10 12:17:21.469798] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.192 [2024-06-10 12:17:21.470147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.192 [2024-06-10 12:17:21.470168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.192 [2024-06-10 12:17:21.475722] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.192 [2024-06-10 12:17:21.476061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.192 [2024-06-10 12:17:21.476082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.192 [2024-06-10 12:17:21.481554] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.192 [2024-06-10 12:17:21.481898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.192 [2024-06-10 12:17:21.481919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.192 [2024-06-10 12:17:21.488016] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.192 [2024-06-10 12:17:21.488355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.192 [2024-06-10 12:17:21.488376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.192 [2024-06-10 12:17:21.494833] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.192 [2024-06-10 12:17:21.495155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.192 [2024-06-10 12:17:21.495180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.192 [2024-06-10 12:17:21.500725] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.192 [2024-06-10 12:17:21.501053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.192 [2024-06-10 12:17:21.501074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.192 [2024-06-10 12:17:21.507564] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.192 [2024-06-10 12:17:21.507922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.192 [2024-06-10 12:17:21.507943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.192 [2024-06-10 12:17:21.514161] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.192 [2024-06-10 12:17:21.514227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.192 [2024-06-10 12:17:21.514246] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.192 [2024-06-10 12:17:21.520084] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.192 [2024-06-10 12:17:21.520418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.192 [2024-06-10 12:17:21.520439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.192 [2024-06-10 12:17:21.525341] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.192 [2024-06-10 12:17:21.525673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.192 [2024-06-10 12:17:21.525694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.192 [2024-06-10 12:17:21.530512] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.192 [2024-06-10 12:17:21.530862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.192 [2024-06-10 12:17:21.530882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.192 [2024-06-10 12:17:21.535700] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.192 [2024-06-10 12:17:21.536039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:32.192 [2024-06-10 12:17:21.536060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.192 [2024-06-10 12:17:21.540770] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.192 [2024-06-10 12:17:21.541105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.192 [2024-06-10 12:17:21.541126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.192 [2024-06-10 12:17:21.545812] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.192 [2024-06-10 12:17:21.546143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.193 [2024-06-10 12:17:21.546163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.193 [2024-06-10 12:17:21.550733] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.193 [2024-06-10 12:17:21.551073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.193 [2024-06-10 12:17:21.551094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.193 [2024-06-10 12:17:21.555634] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.193 [2024-06-10 12:17:21.555980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.193 [2024-06-10 12:17:21.556000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:32.193 [2024-06-10 12:17:21.560398] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:32.193 [2024-06-10 12:17:21.560723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.193 [2024-06-10 12:17:21.560744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[repeated log output condensed: the same three-entry pattern — tcp.c:2062:data_crc32_calc_done *ERROR* Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90, the failing WRITE command (sqid:1 cid:15 nsid:1, len:32, varying lba), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling 0001/0021/0041/0061 — repeats continuously from 12:17:21.565 through 12:17:21.965]
00:28:32.456 [2024-06-10 12:17:21.971672] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:32.714 [2024-06-10 12:17:21.972021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.714 [2024-06-10 12:17:21.972042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:32.714 [2024-06-10 12:17:21.978638] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.714 [2024-06-10 12:17:21.978997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.714 [2024-06-10 12:17:21.979018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.714 [2024-06-10 12:17:21.985261] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.714 [2024-06-10 12:17:21.985606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.714 [2024-06-10 12:17:21.985627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.714 [2024-06-10 12:17:21.992625] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.714 [2024-06-10 12:17:21.992996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.714 [2024-06-10 12:17:21.993020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.714 [2024-06-10 12:17:21.999289] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.714 [2024-06-10 12:17:21.999629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.714 [2024-06-10 12:17:21.999650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.714 [2024-06-10 12:17:22.005835] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.714 [2024-06-10 12:17:22.006180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.714 [2024-06-10 12:17:22.006200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.714 [2024-06-10 12:17:22.012494] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.714 [2024-06-10 12:17:22.012854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.714 [2024-06-10 12:17:22.012876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.714 [2024-06-10 12:17:22.019252] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.714 [2024-06-10 12:17:22.019612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.714 [2024-06-10 12:17:22.019633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.715 [2024-06-10 12:17:22.026050] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.715 [2024-06-10 12:17:22.026386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.715 [2024-06-10 12:17:22.026406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:28:32.715 [2024-06-10 12:17:22.032379] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.715 [2024-06-10 12:17:22.032732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.715 [2024-06-10 12:17:22.032753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.715 [2024-06-10 12:17:22.038919] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.715 [2024-06-10 12:17:22.039256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.715 [2024-06-10 12:17:22.039277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.715 [2024-06-10 12:17:22.045205] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.715 [2024-06-10 12:17:22.045564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.715 [2024-06-10 12:17:22.045585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.715 [2024-06-10 12:17:22.051259] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.715 [2024-06-10 12:17:22.051606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.715 [2024-06-10 12:17:22.051627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.715 [2024-06-10 12:17:22.057360] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.715 [2024-06-10 12:17:22.057690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.715 [2024-06-10 12:17:22.057711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.715 [2024-06-10 12:17:22.063937] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.715 [2024-06-10 12:17:22.064265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.715 [2024-06-10 12:17:22.064285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.715 [2024-06-10 12:17:22.070525] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.715 [2024-06-10 12:17:22.070865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.715 [2024-06-10 12:17:22.070885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.715 [2024-06-10 12:17:22.076759] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.715 [2024-06-10 12:17:22.077099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.715 [2024-06-10 12:17:22.077120] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.715 [2024-06-10 12:17:22.083424] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.715 [2024-06-10 12:17:22.083769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.715 [2024-06-10 12:17:22.083790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.715 [2024-06-10 12:17:22.090335] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.715 [2024-06-10 12:17:22.090685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.715 [2024-06-10 12:17:22.090705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.715 [2024-06-10 12:17:22.096991] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.715 [2024-06-10 12:17:22.097338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.715 [2024-06-10 12:17:22.097359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.715 [2024-06-10 12:17:22.103564] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.715 [2024-06-10 12:17:22.103898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:32.715 [2024-06-10 12:17:22.103918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.715 [2024-06-10 12:17:22.110620] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.715 [2024-06-10 12:17:22.110965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.715 [2024-06-10 12:17:22.110987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.715 [2024-06-10 12:17:22.116792] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.715 [2024-06-10 12:17:22.117132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.715 [2024-06-10 12:17:22.117153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.715 [2024-06-10 12:17:22.122722] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.715 [2024-06-10 12:17:22.123060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.715 [2024-06-10 12:17:22.123080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.715 [2024-06-10 12:17:22.128122] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.715 [2024-06-10 12:17:22.128463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.715 [2024-06-10 12:17:22.128490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.715 [2024-06-10 12:17:22.133024] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.715 [2024-06-10 12:17:22.133355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.715 [2024-06-10 12:17:22.133376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.715 [2024-06-10 12:17:22.137949] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.715 [2024-06-10 12:17:22.138284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.715 [2024-06-10 12:17:22.138305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.715 [2024-06-10 12:17:22.142822] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.715 [2024-06-10 12:17:22.143155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.715 [2024-06-10 12:17:22.143176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.715 [2024-06-10 12:17:22.147610] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.715 [2024-06-10 12:17:22.147947] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.715 [2024-06-10 12:17:22.147968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.715 [2024-06-10 12:17:22.152348] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.715 [2024-06-10 12:17:22.152680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.715 [2024-06-10 12:17:22.152704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.715 [2024-06-10 12:17:22.157112] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.715 [2024-06-10 12:17:22.157442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.715 [2024-06-10 12:17:22.157463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.715 [2024-06-10 12:17:22.161882] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.715 [2024-06-10 12:17:22.162201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.715 [2024-06-10 12:17:22.162221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.715 [2024-06-10 12:17:22.166646] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 
00:28:32.715 [2024-06-10 12:17:22.166979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.715 [2024-06-10 12:17:22.167000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.715 [2024-06-10 12:17:22.171495] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.715 [2024-06-10 12:17:22.171826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.715 [2024-06-10 12:17:22.171847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.715 [2024-06-10 12:17:22.176318] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.715 [2024-06-10 12:17:22.176651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.715 [2024-06-10 12:17:22.176672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.715 [2024-06-10 12:17:22.181938] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.715 [2024-06-10 12:17:22.182273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.716 [2024-06-10 12:17:22.182293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.716 [2024-06-10 12:17:22.189305] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.716 [2024-06-10 12:17:22.189636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.716 [2024-06-10 12:17:22.189658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.716 [2024-06-10 12:17:22.195589] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.716 [2024-06-10 12:17:22.195925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.716 [2024-06-10 12:17:22.195945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.716 [2024-06-10 12:17:22.201572] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.716 [2024-06-10 12:17:22.201926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.716 [2024-06-10 12:17:22.201947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.716 [2024-06-10 12:17:22.206507] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.716 [2024-06-10 12:17:22.206838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.716 [2024-06-10 12:17:22.206860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.716 [2024-06-10 12:17:22.211850] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.716 [2024-06-10 12:17:22.212213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.716 [2024-06-10 12:17:22.212235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.716 [2024-06-10 12:17:22.216777] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.716 [2024-06-10 12:17:22.217110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.716 [2024-06-10 12:17:22.217131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.716 [2024-06-10 12:17:22.221531] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.716 [2024-06-10 12:17:22.221863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.716 [2024-06-10 12:17:22.221884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.716 [2024-06-10 12:17:22.226616] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.716 [2024-06-10 12:17:22.226958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.716 [2024-06-10 12:17:22.226978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:28:32.716 [2024-06-10 12:17:22.232121] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.716 [2024-06-10 12:17:22.232459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.716 [2024-06-10 12:17:22.232486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.974 [2024-06-10 12:17:22.237774] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.974 [2024-06-10 12:17:22.238106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.974 [2024-06-10 12:17:22.238126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.974 [2024-06-10 12:17:22.243605] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.974 [2024-06-10 12:17:22.243935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.974 [2024-06-10 12:17:22.243956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.974 [2024-06-10 12:17:22.249638] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.974 [2024-06-10 12:17:22.249987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.974 [2024-06-10 12:17:22.250008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.974 [2024-06-10 12:17:22.255873] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.974 [2024-06-10 12:17:22.256209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.974 [2024-06-10 12:17:22.256230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.974 [2024-06-10 12:17:22.262162] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.974 [2024-06-10 12:17:22.262495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.974 [2024-06-10 12:17:22.262516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.974 [2024-06-10 12:17:22.268763] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.974 [2024-06-10 12:17:22.269113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.974 [2024-06-10 12:17:22.269134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.974 [2024-06-10 12:17:22.275049] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.974 [2024-06-10 12:17:22.275123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.974 [2024-06-10 12:17:22.275142] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.974 [2024-06-10 12:17:22.281465] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.974 [2024-06-10 12:17:22.281808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.974 [2024-06-10 12:17:22.281829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.974 [2024-06-10 12:17:22.288181] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.974 [2024-06-10 12:17:22.288524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.974 [2024-06-10 12:17:22.288545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.974 [2024-06-10 12:17:22.294872] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.974 [2024-06-10 12:17:22.295201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.974 [2024-06-10 12:17:22.295222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.974 [2024-06-10 12:17:22.300832] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:32.974 [2024-06-10 12:17:22.301163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:32.974 [2024-06-10 12:17:22.301187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:32.974 [2024-06-10 12:17:22.306955] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:32.974 [2024-06-10 12:17:22.307285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.974 [2024-06-10 12:17:22.307306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:32.974 [2024-06-10 12:17:22.313228] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:32.974 [2024-06-10 12:17:22.313558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.974 [2024-06-10 12:17:22.313579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:32.974 [2024-06-10 12:17:22.319601] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:32.974 [2024-06-10 12:17:22.319949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.974 [2024-06-10 12:17:22.319970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:32.974 [2024-06-10 12:17:22.325964] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:32.974 [2024-06-10 12:17:22.326299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.974 [2024-06-10 12:17:22.326320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:32.974 [2024-06-10 12:17:22.333206] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:32.974 [2024-06-10 12:17:22.333558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.974 [2024-06-10 12:17:22.333579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:32.975 [2024-06-10 12:17:22.339483] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:32.975 [2024-06-10 12:17:22.339807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.975 [2024-06-10 12:17:22.339828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:32.975 [2024-06-10 12:17:22.345747] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:32.975 [2024-06-10 12:17:22.346087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.975 [2024-06-10 12:17:22.346108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:32.975 [2024-06-10 12:17:22.352602] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:32.975 [2024-06-10 12:17:22.352951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.975 [2024-06-10 12:17:22.352971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:32.975 [2024-06-10 12:17:22.359347] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:32.975 [2024-06-10 12:17:22.359697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.975 [2024-06-10 12:17:22.359718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:32.975 [2024-06-10 12:17:22.365995] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:32.975 [2024-06-10 12:17:22.366324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.975 [2024-06-10 12:17:22.366345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:32.975 [2024-06-10 12:17:22.372452] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:32.975 [2024-06-10 12:17:22.372526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.975 [2024-06-10 12:17:22.372546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:32.975 [2024-06-10 12:17:22.379170] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:32.975 [2024-06-10 12:17:22.379540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.975 [2024-06-10 12:17:22.379561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:32.975 [2024-06-10 12:17:22.385380] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:32.975 [2024-06-10 12:17:22.385715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.975 [2024-06-10 12:17:22.385736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:32.975 [2024-06-10 12:17:22.392030] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:32.975 [2024-06-10 12:17:22.392365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.975 [2024-06-10 12:17:22.392385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:32.975 [2024-06-10 12:17:22.398673] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:32.975 [2024-06-10 12:17:22.399004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.975 [2024-06-10 12:17:22.399025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:32.975 [2024-06-10 12:17:22.404864] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:32.975 [2024-06-10 12:17:22.405187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.975 [2024-06-10 12:17:22.405208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:32.975 [2024-06-10 12:17:22.410555] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:32.975 [2024-06-10 12:17:22.410895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.975 [2024-06-10 12:17:22.410916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:32.975 [2024-06-10 12:17:22.415825] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:32.975 [2024-06-10 12:17:22.416177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.975 [2024-06-10 12:17:22.416197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:32.975 [2024-06-10 12:17:22.420825] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:32.975 [2024-06-10 12:17:22.421159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.975 [2024-06-10 12:17:22.421180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:32.975 [2024-06-10 12:17:22.425742] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:32.975 [2024-06-10 12:17:22.426068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.975 [2024-06-10 12:17:22.426088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:32.975 [2024-06-10 12:17:22.431261] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:32.975 [2024-06-10 12:17:22.431587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.975 [2024-06-10 12:17:22.431607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:32.975 [2024-06-10 12:17:22.437124] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:32.975 [2024-06-10 12:17:22.437489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.975 [2024-06-10 12:17:22.437510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:32.975 [2024-06-10 12:17:22.444256] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:32.975 [2024-06-10 12:17:22.444592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.975 [2024-06-10 12:17:22.444612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:32.975 [2024-06-10 12:17:22.450092] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:32.975 [2024-06-10 12:17:22.450409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.975 [2024-06-10 12:17:22.450429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:32.975 [2024-06-10 12:17:22.455722] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:32.975 [2024-06-10 12:17:22.456052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.975 [2024-06-10 12:17:22.456073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:32.975 [2024-06-10 12:17:22.461187] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:32.975 [2024-06-10 12:17:22.461527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.975 [2024-06-10 12:17:22.461551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:32.975 [2024-06-10 12:17:22.466799] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:32.975 [2024-06-10 12:17:22.467146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.975 [2024-06-10 12:17:22.467167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:32.975 [2024-06-10 12:17:22.471842] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:32.975 [2024-06-10 12:17:22.472185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.975 [2024-06-10 12:17:22.472206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:32.975 [2024-06-10 12:17:22.476848] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:32.975 [2024-06-10 12:17:22.477191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.975 [2024-06-10 12:17:22.477212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:32.975 [2024-06-10 12:17:22.481758] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:32.975 [2024-06-10 12:17:22.482100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.975 [2024-06-10 12:17:22.482120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:32.975 [2024-06-10 12:17:22.486694] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:32.975 [2024-06-10 12:17:22.487029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.975 [2024-06-10 12:17:22.487051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:32.975 [2024-06-10 12:17:22.492291] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:32.975 [2024-06-10 12:17:22.492637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.976 [2024-06-10 12:17:22.492659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:33.234 [2024-06-10 12:17:22.497610] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.234 [2024-06-10 12:17:22.497947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.234 [2024-06-10 12:17:22.497968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:33.234 [2024-06-10 12:17:22.504025] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.234 [2024-06-10 12:17:22.504355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.234 [2024-06-10 12:17:22.504376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:33.234 [2024-06-10 12:17:22.510276] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.234 [2024-06-10 12:17:22.510624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.234 [2024-06-10 12:17:22.510645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:33.234 [2024-06-10 12:17:22.516734] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.234 [2024-06-10 12:17:22.517057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.234 [2024-06-10 12:17:22.517078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:33.234 [2024-06-10 12:17:22.522859] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.234 [2024-06-10 12:17:22.523191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.234 [2024-06-10 12:17:22.523212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:33.234 [2024-06-10 12:17:22.529011] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.234 [2024-06-10 12:17:22.529348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.234 [2024-06-10 12:17:22.529368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:33.234 [2024-06-10 12:17:22.535415] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.234 [2024-06-10 12:17:22.535512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.234 [2024-06-10 12:17:22.535531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:33.234 [2024-06-10 12:17:22.541830] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.234 [2024-06-10 12:17:22.542164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.234 [2024-06-10 12:17:22.542184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:33.234 [2024-06-10 12:17:22.548122] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.234 [2024-06-10 12:17:22.548213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.234 [2024-06-10 12:17:22.548232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:33.234 [2024-06-10 12:17:22.554350] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.234 [2024-06-10 12:17:22.554684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.234 [2024-06-10 12:17:22.554705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:33.234 [2024-06-10 12:17:22.560656] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.234 [2024-06-10 12:17:22.560985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.234 [2024-06-10 12:17:22.561005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:33.234 [2024-06-10 12:17:22.565920] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.234 [2024-06-10 12:17:22.566247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.234 [2024-06-10 12:17:22.566266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:33.234 [2024-06-10 12:17:22.571014] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.234 [2024-06-10 12:17:22.571345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.234 [2024-06-10 12:17:22.571366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:33.234 [2024-06-10 12:17:22.576045] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.234 [2024-06-10 12:17:22.576372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.234 [2024-06-10 12:17:22.576393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:33.234 [2024-06-10 12:17:22.580974] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.234 [2024-06-10 12:17:22.581311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.234 [2024-06-10 12:17:22.581333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:33.234 [2024-06-10 12:17:22.585755] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.234 [2024-06-10 12:17:22.586098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.234 [2024-06-10 12:17:22.586119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:33.234 [2024-06-10 12:17:22.590549] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.234 [2024-06-10 12:17:22.590875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.234 [2024-06-10 12:17:22.590895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:33.234 [2024-06-10 12:17:22.595452] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.235 [2024-06-10 12:17:22.595777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.235 [2024-06-10 12:17:22.595798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:33.235 [2024-06-10 12:17:22.600787] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.235 [2024-06-10 12:17:22.601118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.235 [2024-06-10 12:17:22.601139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:33.235 [2024-06-10 12:17:22.605693] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.235 [2024-06-10 12:17:22.606014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.235 [2024-06-10 12:17:22.606039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:33.235 [2024-06-10 12:17:22.610500] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.235 [2024-06-10 12:17:22.610840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.235 [2024-06-10 12:17:22.610861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:33.235 [2024-06-10 12:17:22.615205] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.235 [2024-06-10 12:17:22.615539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.235 [2024-06-10 12:17:22.615560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:33.235 [2024-06-10 12:17:22.619870] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.235 [2024-06-10 12:17:22.620198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.235 [2024-06-10 12:17:22.620219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:33.235 [2024-06-10 12:17:22.624538] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.235 [2024-06-10 12:17:22.624865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.235 [2024-06-10 12:17:22.624886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:33.235 [2024-06-10 12:17:22.629818] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.235 [2024-06-10 12:17:22.630137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.235 [2024-06-10 12:17:22.630158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:33.235 [2024-06-10 12:17:22.636736] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.235 [2024-06-10 12:17:22.637082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.235 [2024-06-10 12:17:22.637102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:33.235 [2024-06-10 12:17:22.644234] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.235 [2024-06-10 12:17:22.644586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.235 [2024-06-10 12:17:22.644608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:33.235 [2024-06-10 12:17:22.651595] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.235 [2024-06-10 12:17:22.651950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.235 [2024-06-10 12:17:22.651970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:33.235 [2024-06-10 12:17:22.657487] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.235 [2024-06-10 12:17:22.657826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.235 [2024-06-10 12:17:22.657847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:33.235 [2024-06-10 12:17:22.663167] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.235 [2024-06-10 12:17:22.663500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.235 [2024-06-10 12:17:22.663521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:33.235 [2024-06-10 12:17:22.668551] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.235 [2024-06-10 12:17:22.668889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.235 [2024-06-10 12:17:22.668909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:33.235 [2024-06-10 12:17:22.674408] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.235 [2024-06-10 12:17:22.674747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.235 [2024-06-10 12:17:22.674768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:33.235 [2024-06-10 12:17:22.681473] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.235 [2024-06-10 12:17:22.681822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.235 [2024-06-10 12:17:22.681843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:33.235 [2024-06-10 12:17:22.689278] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.235 [2024-06-10 12:17:22.689610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.235 [2024-06-10 12:17:22.689631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:33.235 [2024-06-10 12:17:22.697395] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.235 [2024-06-10 12:17:22.697749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.235 [2024-06-10 12:17:22.697770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:33.235 [2024-06-10 12:17:22.705180] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.235 [2024-06-10 12:17:22.705529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.235 [2024-06-10 12:17:22.705550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:33.235 [2024-06-10 12:17:22.713251] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.235 [2024-06-10 12:17:22.713580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.235 [2024-06-10 12:17:22.713601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:33.235 [2024-06-10 12:17:22.719715] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.235 [2024-06-10 12:17:22.720072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.235 [2024-06-10 12:17:22.720093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:33.235 [2024-06-10 12:17:22.727244] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.235 [2024-06-10 12:17:22.727594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.235 [2024-06-10 12:17:22.727614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:33.235 [2024-06-10 12:17:22.735114] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.235 [2024-06-10 12:17:22.735451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.235 [2024-06-10 12:17:22.735471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:33.235 [2024-06-10 12:17:22.741635] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.235 [2024-06-10 12:17:22.742001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.235 [2024-06-10 12:17:22.742021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:33.235 [2024-06-10 12:17:22.749871] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.235 [2024-06-10 12:17:22.750226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.494 [2024-06-10 12:17:22.750247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:33.494 [2024-06-10 12:17:22.758256] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.494 [2024-06-10 12:17:22.758605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.494 [2024-06-10 12:17:22.758626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:33.494 [2024-06-10 12:17:22.764520] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.494 [2024-06-10 12:17:22.764843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.494 [2024-06-10 12:17:22.764864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:33.495 [2024-06-10 12:17:22.770227] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.495 [2024-06-10 12:17:22.770569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.495 [2024-06-10 12:17:22.770589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:33.495 [2024-06-10 12:17:22.775633] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90
00:28:33.495 [2024-06-10 12:17:22.775983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.495 [2024-06-10 12:17:22.776007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.495 [2024-06-10 12:17:22.780579] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.495 [2024-06-10 12:17:22.780924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.495 [2024-06-10 12:17:22.780944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.495 [2024-06-10 12:17:22.785635] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.495 [2024-06-10 12:17:22.785970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.495 [2024-06-10 12:17:22.785991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.495 [2024-06-10 12:17:22.790470] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.495 [2024-06-10 12:17:22.790796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.495 [2024-06-10 12:17:22.790817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.495 [2024-06-10 12:17:22.795409] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 
00:28:33.495 [2024-06-10 12:17:22.795757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.495 [2024-06-10 12:17:22.795778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.495 [2024-06-10 12:17:22.800771] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.495 [2024-06-10 12:17:22.801097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.495 [2024-06-10 12:17:22.801117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.495 [2024-06-10 12:17:22.805799] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.495 [2024-06-10 12:17:22.806132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.495 [2024-06-10 12:17:22.806152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.495 [2024-06-10 12:17:22.810629] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.495 [2024-06-10 12:17:22.810981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.495 [2024-06-10 12:17:22.811001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.495 [2024-06-10 12:17:22.815447] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.495 [2024-06-10 12:17:22.815770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.495 [2024-06-10 12:17:22.815790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.495 [2024-06-10 12:17:22.820280] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.495 [2024-06-10 12:17:22.820623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.495 [2024-06-10 12:17:22.820644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.495 [2024-06-10 12:17:22.825107] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.495 [2024-06-10 12:17:22.825431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.495 [2024-06-10 12:17:22.825452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.495 [2024-06-10 12:17:22.829833] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.495 [2024-06-10 12:17:22.830146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.495 [2024-06-10 12:17:22.830168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.495 [2024-06-10 12:17:22.834549] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.495 [2024-06-10 12:17:22.834878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.495 [2024-06-10 12:17:22.834899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.495 [2024-06-10 12:17:22.839233] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.495 [2024-06-10 12:17:22.839583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.495 [2024-06-10 12:17:22.839604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.495 [2024-06-10 12:17:22.843989] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.495 [2024-06-10 12:17:22.844341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.495 [2024-06-10 12:17:22.844362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.495 [2024-06-10 12:17:22.848687] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.495 [2024-06-10 12:17:22.849025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.495 [2024-06-10 12:17:22.849045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:28:33.495 [2024-06-10 12:17:22.853405] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.495 [2024-06-10 12:17:22.853736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.495 [2024-06-10 12:17:22.853757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.495 [2024-06-10 12:17:22.859020] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.495 [2024-06-10 12:17:22.859350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.495 [2024-06-10 12:17:22.859371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.495 [2024-06-10 12:17:22.864909] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.495 [2024-06-10 12:17:22.865251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.495 [2024-06-10 12:17:22.865272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.495 [2024-06-10 12:17:22.871604] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.495 [2024-06-10 12:17:22.871959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.495 [2024-06-10 12:17:22.871980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.495 [2024-06-10 12:17:22.877611] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.495 [2024-06-10 12:17:22.877944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.495 [2024-06-10 12:17:22.877964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.495 [2024-06-10 12:17:22.883168] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.495 [2024-06-10 12:17:22.883502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.495 [2024-06-10 12:17:22.883523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.495 [2024-06-10 12:17:22.888782] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.495 [2024-06-10 12:17:22.889137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.495 [2024-06-10 12:17:22.889158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.495 [2024-06-10 12:17:22.893966] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.495 [2024-06-10 12:17:22.894293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.495 [2024-06-10 12:17:22.894314] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.495 [2024-06-10 12:17:22.899832] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.495 [2024-06-10 12:17:22.900182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.495 [2024-06-10 12:17:22.900202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.495 [2024-06-10 12:17:22.906097] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.495 [2024-06-10 12:17:22.906445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.495 [2024-06-10 12:17:22.906465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.495 [2024-06-10 12:17:22.912199] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.496 [2024-06-10 12:17:22.912526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.496 [2024-06-10 12:17:22.912550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.496 [2024-06-10 12:17:22.918279] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.496 [2024-06-10 12:17:22.918609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:33.496 [2024-06-10 12:17:22.918630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.496 [2024-06-10 12:17:22.924966] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.496 [2024-06-10 12:17:22.925318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.496 [2024-06-10 12:17:22.925338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.496 [2024-06-10 12:17:22.930631] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.496 [2024-06-10 12:17:22.930982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.496 [2024-06-10 12:17:22.931002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.496 [2024-06-10 12:17:22.936255] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.496 [2024-06-10 12:17:22.936590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.496 [2024-06-10 12:17:22.936610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.496 [2024-06-10 12:17:22.942263] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.496 [2024-06-10 12:17:22.942605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.496 [2024-06-10 12:17:22.942625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.496 [2024-06-10 12:17:22.949163] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.496 [2024-06-10 12:17:22.949514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.496 [2024-06-10 12:17:22.949534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.496 [2024-06-10 12:17:22.954502] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.496 [2024-06-10 12:17:22.954839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.496 [2024-06-10 12:17:22.954860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.496 [2024-06-10 12:17:22.959517] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.496 [2024-06-10 12:17:22.959843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.496 [2024-06-10 12:17:22.959864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.496 [2024-06-10 12:17:22.964622] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.496 [2024-06-10 12:17:22.964962] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.496 [2024-06-10 12:17:22.964982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.496 [2024-06-10 12:17:22.969595] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.496 [2024-06-10 12:17:22.969945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.496 [2024-06-10 12:17:22.969965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.496 [2024-06-10 12:17:22.974439] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.496 [2024-06-10 12:17:22.974776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.496 [2024-06-10 12:17:22.974797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.496 [2024-06-10 12:17:22.979233] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.496 [2024-06-10 12:17:22.979582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.496 [2024-06-10 12:17:22.979603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.496 [2024-06-10 12:17:22.983997] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 
00:28:33.496 [2024-06-10 12:17:22.984324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.496 [2024-06-10 12:17:22.984344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.496 [2024-06-10 12:17:22.988703] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.496 [2024-06-10 12:17:22.989032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.496 [2024-06-10 12:17:22.989053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.496 [2024-06-10 12:17:22.993502] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.496 [2024-06-10 12:17:22.993823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.496 [2024-06-10 12:17:22.993844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.496 [2024-06-10 12:17:22.998237] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.496 [2024-06-10 12:17:22.998569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.496 [2024-06-10 12:17:22.998590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.496 [2024-06-10 12:17:23.002905] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.496 [2024-06-10 12:17:23.003236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.496 [2024-06-10 12:17:23.003260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.496 [2024-06-10 12:17:23.007817] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.496 [2024-06-10 12:17:23.008141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.496 [2024-06-10 12:17:23.008162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.754 [2024-06-10 12:17:23.013291] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.754 [2024-06-10 12:17:23.013650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.754 [2024-06-10 12:17:23.013672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.754 [2024-06-10 12:17:23.018342] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.754 [2024-06-10 12:17:23.018685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.754 [2024-06-10 12:17:23.018706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.754 [2024-06-10 12:17:23.023949] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.754 [2024-06-10 12:17:23.024294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.754 [2024-06-10 12:17:23.024315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.754 [2024-06-10 12:17:23.029556] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.754 [2024-06-10 12:17:23.029904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.755 [2024-06-10 12:17:23.029924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.755 [2024-06-10 12:17:23.034948] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.755 [2024-06-10 12:17:23.035279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.755 [2024-06-10 12:17:23.035299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.755 [2024-06-10 12:17:23.041082] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.755 [2024-06-10 12:17:23.041408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.755 [2024-06-10 12:17:23.041428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:28:33.755 [2024-06-10 12:17:23.046955] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.755 [2024-06-10 12:17:23.047040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.755 [2024-06-10 12:17:23.047059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.755 [2024-06-10 12:17:23.053607] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.755 [2024-06-10 12:17:23.053952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.755 [2024-06-10 12:17:23.053972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.755 [2024-06-10 12:17:23.060431] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.755 [2024-06-10 12:17:23.060775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.755 [2024-06-10 12:17:23.060796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.755 [2024-06-10 12:17:23.066113] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.755 [2024-06-10 12:17:23.066451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.755 [2024-06-10 12:17:23.066472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.755 [2024-06-10 12:17:23.071250] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.755 [2024-06-10 12:17:23.071595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.755 [2024-06-10 12:17:23.071615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.755 [2024-06-10 12:17:23.076731] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.755 [2024-06-10 12:17:23.077067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.755 [2024-06-10 12:17:23.077087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.755 [2024-06-10 12:17:23.083471] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.755 [2024-06-10 12:17:23.083836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.755 [2024-06-10 12:17:23.083857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.755 [2024-06-10 12:17:23.091085] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.755 [2024-06-10 12:17:23.091427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.755 [2024-06-10 12:17:23.091448] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.755 [2024-06-10 12:17:23.099046] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.755 [2024-06-10 12:17:23.099395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.755 [2024-06-10 12:17:23.099415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.755 [2024-06-10 12:17:23.107097] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.755 [2024-06-10 12:17:23.107458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.755 [2024-06-10 12:17:23.107485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.755 [2024-06-10 12:17:23.115314] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.755 [2024-06-10 12:17:23.115675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.755 [2024-06-10 12:17:23.115695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.755 [2024-06-10 12:17:23.123513] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.755 [2024-06-10 12:17:23.123885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:33.755 [2024-06-10 12:17:23.123905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.755 [2024-06-10 12:17:23.131447] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.755 [2024-06-10 12:17:23.131779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.755 [2024-06-10 12:17:23.131799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.755 [2024-06-10 12:17:23.139598] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.755 [2024-06-10 12:17:23.139937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.755 [2024-06-10 12:17:23.139957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.755 [2024-06-10 12:17:23.147485] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.755 [2024-06-10 12:17:23.147831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.755 [2024-06-10 12:17:23.147851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.755 [2024-06-10 12:17:23.155659] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.755 [2024-06-10 12:17:23.156014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.755 [2024-06-10 12:17:23.156034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.755 [2024-06-10 12:17:23.163876] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.755 [2024-06-10 12:17:23.164207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.755 [2024-06-10 12:17:23.164227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.755 [2024-06-10 12:17:23.172667] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.755 [2024-06-10 12:17:23.173006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.755 [2024-06-10 12:17:23.173026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.755 [2024-06-10 12:17:23.180390] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.755 [2024-06-10 12:17:23.180817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.755 [2024-06-10 12:17:23.180843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.755 [2024-06-10 12:17:23.188629] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x77b720) with pdu=0x2000190fef90 00:28:33.755 [2024-06-10 12:17:23.188963] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.755 [2024-06-10 12:17:23.188983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.755 00:28:33.755 Latency(us) 00:28:33.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:33.755 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:33.755 nvme0n1 : 2.00 5351.80 668.98 0.00 0.00 2984.46 2110.26 10171.19 00:28:33.755 =================================================================================================================== 00:28:33.755 Total : 5351.80 668.98 0.00 0.00 2984.46 2110.26 10171.19 00:28:33.755 0 00:28:33.755 12:17:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:33.755 12:17:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:33.755 12:17:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:33.755 | .driver_specific 00:28:33.755 | .nvme_error 00:28:33.755 | .status_code 00:28:33.755 | .command_transient_transport_error' 00:28:33.755 12:17:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:34.013 12:17:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 345 > 0 )) 00:28:34.013 12:17:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2373117 00:28:34.013 12:17:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 2373117 ']' 00:28:34.013 12:17:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 2373117 00:28:34.013 12:17:23 
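The transcript above shows `get_transient_errcount` piping `rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1` through a jq filter and then asserting the count is positive (`(( 345 > 0 ))` at `host/digest.sh@71`). The sketch below reproduces only the extraction step on a hand-written sample iostat document: the field path is copied from the jq filter in the log, the surrounding JSON shape and the use of `python3` as a jq stand-in are assumptions, and the value 345 is taken from the count the log reports.

```shell
# Sample document shaped like bdev_get_iostat output (shape is an assumption;
# only the nested field path and the value 345 come from the log).
sample='{"bdevs":[{"name":"nvme0n1","driver_specific":{"nvme_error":{"status_code":{"command_transient_transport_error":345}}}}]}'

# Same path as the jq filter in the log:
#   .bdevs[0] | .driver_specific | .nvme_error | .status_code
#             | .command_transient_transport_error
errcount=$(printf '%s' "$sample" | python3 -c '
import json, sys
doc = json.load(sys.stdin)
print(doc["bdevs"][0]["driver_specific"]["nvme_error"]
         ["status_code"]["command_transient_transport_error"])
')
echo "$errcount"
```

The test then treats a positive count as proof that data-digest corruption was detected and surfaced as transient transport errors rather than silent data corruption.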
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:28:34.013 12:17:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:34.013 12:17:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2373117 00:28:34.013 12:17:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:34.013 12:17:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:34.013 12:17:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2373117' 00:28:34.013 killing process with pid 2373117 00:28:34.013 12:17:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 2373117 00:28:34.013 Received shutdown signal, test time was about 2.000000 seconds 00:28:34.013 00:28:34.013 Latency(us) 00:28:34.013 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:34.013 =================================================================================================================== 00:28:34.013 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:34.013 12:17:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 2373117 00:28:34.270 12:17:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2370935 00:28:34.270 12:17:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 2370935 ']' 00:28:34.270 12:17:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 2370935 00:28:34.270 12:17:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:28:34.270 12:17:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:34.270 12:17:23 
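The `killprocess` calls traced above follow a fixed sequence: probe the PID with `kill -0`, read its command name with `ps --no-headers -o comm=`, refuse to kill a `sudo` wrapper, then kill and reap. A minimal sketch of that pattern, demonstrated against a throwaway `sleep` (the function body is a simplified reconstruction, not the exact `autotest_common.sh` code):

```shell
killprocess() {
  pid=$1
  # Probe existence first, as autotest_common.sh@953 does with `kill -0`.
  kill -0 "$pid" 2>/dev/null || { echo "no such process: $pid"; return 1; }
  # Command name, e.g. reactor_0/reactor_1 in the log.
  name=$(ps --no-headers -o comm= "$pid" 2>/dev/null || true)
  if [ "$name" = sudo ]; then
    echo "refusing to kill sudo wrapper $pid"
    return 1
  fi
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true   # reap if it is our child; ignore signal status
}

sleep 60 &            # demo target standing in for the SPDK reactor process
killprocess $!
```

The `ps --no-headers -o comm=` step matters because the PID may have been recycled; checking the command name guards against killing an unrelated process, and the transcript later shows the not-found path too (`kill: (2370935) - No such process`).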
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2370935 00:28:34.270 12:17:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:34.270 12:17:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:34.270 12:17:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2370935' 00:28:34.270 killing process with pid 2370935 00:28:34.270 12:17:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 2370935 00:28:34.270 12:17:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 2370935 00:28:34.528 00:28:34.528 real 0m16.736s 00:28:34.528 user 0m31.344s 00:28:34.529 sys 0m5.199s 00:28:34.529 12:17:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:34.529 12:17:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:34.529 ************************************ 00:28:34.529 END TEST nvmf_digest_error 00:28:34.529 ************************************ 00:28:34.529 12:17:23 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:34.529 12:17:23 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:34.529 12:17:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:34.529 12:17:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:28:34.529 12:17:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:34.529 12:17:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:28:34.529 12:17:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:34.529 12:17:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:34.529 rmmod nvme_tcp 00:28:34.529 rmmod nvme_fabrics 00:28:34.529 rmmod nvme_keyring 
00:28:34.529 12:17:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:34.529 12:17:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:28:34.529 12:17:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:28:34.529 12:17:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2370935 ']' 00:28:34.529 12:17:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2370935 00:28:34.529 12:17:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@949 -- # '[' -z 2370935 ']' 00:28:34.529 12:17:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@953 -- # kill -0 2370935 00:28:34.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (2370935) - No such process 00:28:34.529 12:17:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@976 -- # echo 'Process with pid 2370935 is not found' 00:28:34.529 Process with pid 2370935 is not found 00:28:34.529 12:17:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:34.529 12:17:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:34.529 12:17:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:34.529 12:17:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:34.529 12:17:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:34.529 12:17:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.529 12:17:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:34.529 12:17:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.059 12:17:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:37.059 00:28:37.059 real 0m42.875s 00:28:37.059 user 1m4.992s 00:28:37.059 sys 0m15.838s 00:28:37.059 12:17:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1125 
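During `nvmftestfini` above, `common.sh` disables errexit (`set +e`), retries `modprobe -v -r nvme-tcp` inside `for i in {1..20}`, then restores `set -e`, because module removal can fail transiently while references drain. A generic retry sketch of that loop shape; the `flaky` command here is a made-up stand-in that succeeds on its third call, since actually unloading kernel modules needs root:

```shell
attempts=0
flaky() {                         # stand-in for `modprobe -v -r nvme-tcp`:
  attempts=$((attempts + 1))      # fails twice, then succeeds
  [ "$attempts" -ge 3 ]
}

retry() {                         # retry "$@" up to $1 times, stop on success
  local max=$1 i
  shift
  for i in $(seq 1 "$max"); do
    "$@" && return 0
  done
  return 1
}

retry 20 flaky
echo "succeeded after $attempts attempts"
```

Running the command inside the loop's `&&` means a failure does not trip errexit, which is the same effect the original achieves by bracketing the loop with `set +e` / `set -e`.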
-- # xtrace_disable 00:28:37.059 12:17:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:37.059 ************************************ 00:28:37.059 END TEST nvmf_digest 00:28:37.059 ************************************ 00:28:37.059 12:17:26 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:28:37.059 12:17:26 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:28:37.059 12:17:26 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:28:37.059 12:17:26 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:37.059 12:17:26 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:28:37.059 12:17:26 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:37.059 12:17:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:37.059 ************************************ 00:28:37.059 START TEST nvmf_bdevperf 00:28:37.059 ************************************ 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:37.059 * Looking for test storage... 
00:28:37.059 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:37.059 12:17:26 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:37.059 12:17:26 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:37.059 12:17:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:37.060 12:17:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:37.060 12:17:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.060 12:17:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:37.060 12:17:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.060 12:17:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:37.060 12:17:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:37.060 12:17:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:37.060 12:17:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:43.703 12:17:32 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:43.703 
12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:43.703 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.703 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:43.704 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:43.704 Found net devices under 0000:af:00.0: cvl_0_0 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:43.704 Found net devices under 0000:af:00.1: cvl_0_1 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:43.704 12:17:32 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:43.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:43.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:28:43.704 00:28:43.704 --- 10.0.0.2 ping statistics --- 00:28:43.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.704 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:43.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:43.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:28:43.704 00:28:43.704 --- 10.0.0.1 ping statistics --- 00:28:43.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.704 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2377364 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2377364 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 2377364 ']' 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 
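The network plumbing logged above (nvmf/common.sh `nvmf_tcp_init`) moves the target NIC into a network namespace, addresses both ends, and opens TCP port 4420 before the ping checks. A minimal sketch, mirroring the interface names and IPs from the log; the helper function name is ours, and it only builds the command strings rather than executing them, since the real steps require root:

```python
# Sketch of the nvmf_tcp_init sequence above: target NIC goes into a netns,
# both ends get /24 addresses, links come up, and port 4420 is opened.
# Interface/IP values mirror the log; tcp_init_commands() is a hypothetical
# helper that only returns the command strings.

def tcp_init_commands(target_if="cvl_0_0", initiator_if="cvl_0_1",
                      ns="cvl_0_0_ns_spdk",
                      target_ip="10.0.0.2", initiator_ip="10.0.0.1",
                      port=4420):
    return [
        f"ip -4 addr flush {target_if}",
        f"ip -4 addr flush {initiator_if}",
        f"ip netns add {ns}",
        f"ip link set {target_if} netns {ns}",
        f"ip addr add {initiator_ip}/24 dev {initiator_if}",
        f"ip netns exec {ns} ip addr add {target_ip}/24 dev {target_if}",
        f"ip link set {initiator_if} up",
        f"ip netns exec {ns} ip link set {target_if} up",
        f"ip netns exec {ns} ip link set lo up",
        f"iptables -I INPUT 1 -i {initiator_if} -p tcp --dport {port} -j ACCEPT",
    ]

cmds = tcp_init_commands()
print(len(cmds))
```

Because the target interface lives inside the namespace, the target app itself is later launched under `ip netns exec cvl_0_0_ns_spdk`, which is why `NVMF_APP` is prefixed with `NVMF_TARGET_NS_CMD` in the log.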
00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:43.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:43.704 12:17:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:43.704 [2024-06-10 12:17:32.873559] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:28:43.704 [2024-06-10 12:17:32.873608] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:43.704 EAL: No free 2048 kB hugepages reported on node 1 00:28:43.704 [2024-06-10 12:17:32.946737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:43.704 [2024-06-10 12:17:33.018605] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:43.704 [2024-06-10 12:17:33.018654] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:43.704 [2024-06-10 12:17:33.018663] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:43.704 [2024-06-10 12:17:33.018671] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:43.704 [2024-06-10 12:17:33.018678] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:43.704 [2024-06-10 12:17:33.018781] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:28:43.704 [2024-06-10 12:17:33.018864] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:28:43.704 [2024-06-10 12:17:33.018866] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.269 12:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:44.269 12:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:28:44.269 12:17:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:44.269 12:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:44.269 12:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:44.269 12:17:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:44.269 12:17:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:44.269 12:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.269 12:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:44.269 [2024-06-10 12:17:33.710307] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:44.269 12:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.269 12:17:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:44.269 12:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.269 12:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:44.270 Malloc0 00:28:44.270 12:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.270 12:17:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:44.270 12:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.270 12:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:44.270 12:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.270 12:17:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:44.270 12:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.270 12:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:44.270 12:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.270 12:17:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:44.270 12:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.270 12:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:44.270 [2024-06-10 12:17:33.768410] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:44.270 12:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.270 12:17:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:44.270 12:17:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:44.270 12:17:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:44.270 12:17:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:44.270 12:17:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:44.270 12:17:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:28:44.270 { 00:28:44.270 "params": { 00:28:44.270 "name": "Nvme$subsystem", 00:28:44.270 "trtype": "$TEST_TRANSPORT", 00:28:44.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.270 "adrfam": "ipv4", 00:28:44.270 "trsvcid": "$NVMF_PORT", 00:28:44.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.270 "hdgst": ${hdgst:-false}, 00:28:44.270 "ddgst": ${ddgst:-false} 00:28:44.270 }, 00:28:44.270 "method": "bdev_nvme_attach_controller" 00:28:44.270 } 00:28:44.270 EOF 00:28:44.270 )") 00:28:44.270 12:17:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:44.270 12:17:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:44.270 12:17:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:44.270 12:17:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:44.270 "params": { 00:28:44.270 "name": "Nvme1", 00:28:44.270 "trtype": "tcp", 00:28:44.270 "traddr": "10.0.0.2", 00:28:44.270 "adrfam": "ipv4", 00:28:44.270 "trsvcid": "4420", 00:28:44.270 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:44.270 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:44.270 "hdgst": false, 00:28:44.270 "ddgst": false 00:28:44.270 }, 00:28:44.270 "method": "bdev_nvme_attach_controller" 00:28:44.270 }' 00:28:44.527 [2024-06-10 12:17:33.820333] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:28:44.527 [2024-06-10 12:17:33.820386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2377610 ] 00:28:44.527 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.527 [2024-06-10 12:17:33.890769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.527 [2024-06-10 12:17:33.959806] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.784 Running I/O for 1 seconds... 00:28:46.157 00:28:46.157 Latency(us) 00:28:46.157 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:46.157 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:46.157 Verification LBA range: start 0x0 length 0x4000 00:28:46.157 Nvme1n1 : 1.01 11563.51 45.17 0.00 0.00 11029.95 1120.67 16567.50 00:28:46.157 =================================================================================================================== 00:28:46.157 Total : 11563.51 45.17 0.00 0.00 11029.95 1120.67 16567.50 00:28:46.157 12:17:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2377887 00:28:46.157 12:17:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:46.157 12:17:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:46.157 12:17:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:46.157 12:17:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:46.157 12:17:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:46.157 12:17:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:46.157 12:17:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:46.157 { 
00:28:46.157 "params": { 00:28:46.157 "name": "Nvme$subsystem", 00:28:46.157 "trtype": "$TEST_TRANSPORT", 00:28:46.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.157 "adrfam": "ipv4", 00:28:46.157 "trsvcid": "$NVMF_PORT", 00:28:46.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.157 "hdgst": ${hdgst:-false}, 00:28:46.157 "ddgst": ${ddgst:-false} 00:28:46.157 }, 00:28:46.157 "method": "bdev_nvme_attach_controller" 00:28:46.157 } 00:28:46.157 EOF 00:28:46.157 )") 00:28:46.157 12:17:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:46.157 12:17:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:46.157 12:17:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:46.157 12:17:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:46.157 "params": { 00:28:46.157 "name": "Nvme1", 00:28:46.157 "trtype": "tcp", 00:28:46.157 "traddr": "10.0.0.2", 00:28:46.157 "adrfam": "ipv4", 00:28:46.157 "trsvcid": "4420", 00:28:46.157 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:46.157 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:46.157 "hdgst": false, 00:28:46.157 "ddgst": false 00:28:46.157 }, 00:28:46.157 "method": "bdev_nvme_attach_controller" 00:28:46.157 }' 00:28:46.157 [2024-06-10 12:17:35.469881] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
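The per-job summary line bdevperf printed after the 1-second run above follows the header `runtime(s) IOPS MiB/s Fail/s TO/s Average min max`. A small sketch that extracts those numbers from such a line, assuming that fixed column order (the parser is ours, not part of SPDK):

```python
# Sketch: parse a bdevperf job summary line of the form
#   "Nvme1n1 : runtime IOPS MiB/s Fail/s TO/s avg min max"
# where the three latency columns are in microseconds. Column order is
# assumed from the log's header line; parse_job_line() is our own helper.

def parse_job_line(line):
    # Everything after the first ':' is the eight numeric columns.
    fields = line.split(":", 1)[1].split()
    runtime, iops, mibs, fails, tos, avg, lo, hi = map(float, fields)
    return {"runtime_s": runtime, "iops": iops, "mib_s": mibs,
            "fail_s": fails, "to_s": tos,
            "avg_us": avg, "min_us": lo, "max_us": hi}

sample = "Nvme1n1 : 1.01 11563.51 45.17 0.00 0.00 11029.95 1120.67 16567.50"
print(parse_job_line(sample)["iops"])
```

The second bdevperf invocation above runs the same workload for 15 seconds with `-f`; the target is then killed with `kill -9`, which is what produces the stream of `ABORTED - SQ DELETION` completions that follows.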
00:28:46.157 [2024-06-10 12:17:35.469944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2377887 ] 00:28:46.157 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.157 [2024-06-10 12:17:35.541710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.157 [2024-06-10 12:17:35.610574] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.417 Running I/O for 15 seconds... 00:28:48.943 12:17:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2377364 00:28:48.943 12:17:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:48.943 [2024-06-10 12:17:38.445018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:115280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.943 [2024-06-10 12:17:38.445058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.943 [2024-06-10 12:17:38.445077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:115288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.943 [2024-06-10 12:17:38.445089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.943 [2024-06-10 12:17:38.445101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:115296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.943 [2024-06-10 12:17:38.445112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.943 [2024-06-10 12:17:38.445125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:115304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:48.943 [2024-06-10 12:17:38.445142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.943 [2024-06-10 12:17:38.445155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:115312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.943 [2024-06-10 12:17:38.445166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.943 [2024-06-10 12:17:38.445181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:115320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.943 [2024-06-10 12:17:38.445192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.943 [2024-06-10 12:17:38.445204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:115328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.943 [2024-06-10 12:17:38.445214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.943 [2024-06-10 12:17:38.445226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:115336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.943 [2024-06-10 12:17:38.445237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.943 [2024-06-10 12:17:38.445250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.943 [2024-06-10 12:17:38.445260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.943 [2024-06-10 12:17:38.445273] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:115352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.943 [2024-06-10 12:17:38.445283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.943 [2024-06-10 12:17:38.445295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:115360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.943 [2024-06-10 12:17:38.445305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.943 [2024-06-10 12:17:38.445317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:115368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.943 [2024-06-10 12:17:38.445328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.943 [2024-06-10 12:17:38.445341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:115376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.943 [2024-06-10 12:17:38.445351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.943 [2024-06-10 12:17:38.445362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:115384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.943 [2024-06-10 12:17:38.445374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.943 [2024-06-10 12:17:38.445387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:115392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.943 [2024-06-10 12:17:38.445399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.943 [2024-06-10 12:17:38.445410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:115400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.943 [2024-06-10 12:17:38.445422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.943 [2024-06-10 12:17:38.445436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:115408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.943 [2024-06-10 12:17:38.445446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.943 [2024-06-10 12:17:38.445458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:115416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.943 [2024-06-10 12:17:38.445469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.943 [2024-06-10 12:17:38.445489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:115424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.943 [2024-06-10 12:17:38.445499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.943 [2024-06-10 12:17:38.445510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:115432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.943 [2024-06-10 12:17:38.445519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.943 [2024-06-10 12:17:38.445530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:115440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.943 
[2024-06-10 12:17:38.445539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.445560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:115448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.445569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.445579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:115456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.445588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.445598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:115464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.445607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.445617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:115472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.445627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.445637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:115480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.445646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.445657] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:115488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.445667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.445677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:115496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.445686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.445696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:115504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.445706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.445716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:115512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.445725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.445735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:115520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.445744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.445754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:115528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.445763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.445773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:115536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.445783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.445796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:115544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.445805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.445818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:115552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.445827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.445838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:115560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.445847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.445858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:115568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.445868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.445879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:115576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 
[2024-06-10 12:17:38.445889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.445901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:115584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.445910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.445920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:115592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.445929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.445941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:115600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.445951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.445962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:115608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.445972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.445984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:115616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.445993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.446003] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:115624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.446015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.446027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:115632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.446038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.446049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:115640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.446060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.446071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:115648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.446084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.446097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:115656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.446109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.446120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:115664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.446131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.446145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:115672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.446156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.446169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:115680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.446180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.446192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:115688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.446204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.446215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:115696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.446229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.446243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:115704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.446257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.446268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:115712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:48.944 [2024-06-10 12:17:38.446277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.446291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:115720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.446306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.446323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:115728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.446338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.446355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:115736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.446370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.446387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:115744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.446402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.944 [2024-06-10 12:17:38.446415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:115752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.944 [2024-06-10 12:17:38.446426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.446439] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:115760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.446451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.446463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:115768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.446475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.446493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:115776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.446502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.446513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:115784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.446522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.446533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:115792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.446542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.446553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:115800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.446562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.446573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:115808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.446585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.446595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:115816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.446604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.446614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:115824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.446623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.446634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:115832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.446643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.446654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:115840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.446663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.446673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:115848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:48.945 [2024-06-10 12:17:38.446682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.446692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:115856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.446701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.446711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:115864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.446720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.446730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:115872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.446739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.446749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:115880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.446758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.446768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:115888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.446777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.446787] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:115896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.446796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.446806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:115904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.446815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.446827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:115912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.446836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.446846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:115920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.446855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.446866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:115928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.446875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.446885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:115936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.446894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.446904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:115944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.446913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.446924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:115952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.446933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.446944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:115960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.446952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.446962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:115968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.446971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.446982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:115976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.446991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.447001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:115984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:48.945 [2024-06-10 12:17:38.447009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.447019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:115992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.447028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.447039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:116000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.447048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.447058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:116008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.447068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.447078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:116016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.447087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.447097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:116024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.447106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.447117] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:116032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.447126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.447136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:116040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.447145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.447156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:116048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.447165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.447175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:116056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.447184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.447196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:116064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.945 [2024-06-10 12:17:38.447205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.945 [2024-06-10 12:17:38.447216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:116072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.946 [2024-06-10 12:17:38.447225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.946 [2024-06-10 12:17:38.447235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:116080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.946 [2024-06-10 12:17:38.447244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.946 [2024-06-10 12:17:38.447254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:116088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.946 [2024-06-10 12:17:38.447263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.946 [2024-06-10 12:17:38.447273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:116096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.946 [2024-06-10 12:17:38.447282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.946 [2024-06-10 12:17:38.447293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:116104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.946 [2024-06-10 12:17:38.447301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.946 [2024-06-10 12:17:38.447312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:116112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.946 [2024-06-10 12:17:38.447321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.946 [2024-06-10 12:17:38.447332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:116120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.946 
[2024-06-10 12:17:38.447341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.946 [2024-06-10 12:17:38.447351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:116128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.946 [2024-06-10 12:17:38.447360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.946 [2024-06-10 12:17:38.447370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:116136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.946 [2024-06-10 12:17:38.447379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.946 [2024-06-10 12:17:38.447389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:116144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.946 [2024-06-10 12:17:38.447399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.946 [2024-06-10 12:17:38.447409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:116152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.946 [2024-06-10 12:17:38.447418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.946 [2024-06-10 12:17:38.447428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:116160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.946 [2024-06-10 12:17:38.447437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.946 [2024-06-10 12:17:38.447448] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:115152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.946 [2024-06-10 12:17:38.447457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.946 [2024-06-10 12:17:38.447467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:115160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.946 [2024-06-10 12:17:38.447485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.946 [2024-06-10 12:17:38.447496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:115168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.946 [2024-06-10 12:17:38.447505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.946 [2024-06-10 12:17:38.447515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:115176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.946 [2024-06-10 12:17:38.447524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.946 [2024-06-10 12:17:38.447535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:115184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.946 [2024-06-10 12:17:38.447544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.946 [2024-06-10 12:17:38.447554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:115192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.946 [2024-06-10 12:17:38.447564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.946 [2024-06-10 12:17:38.447575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:115200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.946 [2024-06-10 12:17:38.447584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.946 [2024-06-10 12:17:38.447594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:115208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.946 [2024-06-10 12:17:38.447603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.946 [2024-06-10 12:17:38.447614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:115216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.946 [2024-06-10 12:17:38.447624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.946 [2024-06-10 12:17:38.447635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:115224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.946 [2024-06-10 12:17:38.447644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.946 [2024-06-10 12:17:38.447654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:115232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.946 [2024-06-10 12:17:38.447662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.946 [2024-06-10 12:17:38.447673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:115240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:48.946 [2024-06-10 12:17:38.447682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.946 [2024-06-10 12:17:38.447692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:115248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.946 [2024-06-10 12:17:38.447701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.946 [2024-06-10 12:17:38.447711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:115256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.946 [2024-06-10 12:17:38.447720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.946 [2024-06-10 12:17:38.447730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:115264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.946 [2024-06-10 12:17:38.447739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.946 [2024-06-10 12:17:38.447750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:116168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.946 [2024-06-10 12:17:38.447759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.946 [2024-06-10 12:17:38.447769] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1874600 is same with the state(5) to be set 00:28:48.946 [2024-06-10 12:17:38.447780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:48.946 [2024-06-10 12:17:38.447789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:28:48.946 [2024-06-10 12:17:38.447806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115272 len:8 PRP1 0x0 PRP2 0x0 00:28:48.946 [2024-06-10 12:17:38.447816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.946 [2024-06-10 12:17:38.447863] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1874600 was disconnected and freed. reset controller. 00:28:48.946 [2024-06-10 12:17:38.447908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:48.946 [2024-06-10 12:17:38.447919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.946 [2024-06-10 12:17:38.447930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:48.946 [2024-06-10 12:17:38.447938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.946 [2024-06-10 12:17:38.447948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:48.946 [2024-06-10 12:17:38.447956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.946 [2024-06-10 12:17:38.447965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:48.946 [2024-06-10 12:17:38.447975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.946 [2024-06-10 12:17:38.447984] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1643820 is same with the state(5) to be set 00:28:48.946 [2024-06-10 12:17:38.450625] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.946 [2024-06-10 12:17:38.450650] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:48.946 [2024-06-10 12:17:38.451266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.946 [2024-06-10 12:17:38.451284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:48.946 [2024-06-10 12:17:38.451294] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:48.946 [2024-06-10 12:17:38.451459] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:48.946 [2024-06-10 12:17:38.451638] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.946 [2024-06-10 12:17:38.451648] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.946 [2024-06-10 12:17:38.451658] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.946 [2024-06-10 12:17:38.454259] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.206 [2024-06-10 12:17:38.463735] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.206 [2024-06-10 12:17:38.464093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.206 [2024-06-10 12:17:38.464111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.206 [2024-06-10 12:17:38.464120] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.206 [2024-06-10 12:17:38.464277] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.206 [2024-06-10 12:17:38.464434] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.206 [2024-06-10 12:17:38.464443] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.206 [2024-06-10 12:17:38.464452] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.206 [2024-06-10 12:17:38.467001] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.207 [2024-06-10 12:17:38.476507] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.207 [2024-06-10 12:17:38.476865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.207 [2024-06-10 12:17:38.476883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.207 [2024-06-10 12:17:38.476892] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.207 [2024-06-10 12:17:38.477048] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.207 [2024-06-10 12:17:38.477205] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.207 [2024-06-10 12:17:38.477214] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.207 [2024-06-10 12:17:38.477222] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.207 [2024-06-10 12:17:38.479690] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.207 [2024-06-10 12:17:38.489266] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.207 [2024-06-10 12:17:38.489691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.207 [2024-06-10 12:17:38.489746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.207 [2024-06-10 12:17:38.489778] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.207 [2024-06-10 12:17:38.490241] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.207 [2024-06-10 12:17:38.490398] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.207 [2024-06-10 12:17:38.490407] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.207 [2024-06-10 12:17:38.490416] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.207 [2024-06-10 12:17:38.492957] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.207 [2024-06-10 12:17:38.501953] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.207 [2024-06-10 12:17:38.502439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.207 [2024-06-10 12:17:38.502503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.207 [2024-06-10 12:17:38.502536] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.207 [2024-06-10 12:17:38.503124] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.207 [2024-06-10 12:17:38.503497] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.207 [2024-06-10 12:17:38.503508] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.207 [2024-06-10 12:17:38.503517] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.207 [2024-06-10 12:17:38.505985] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.207 [2024-06-10 12:17:38.514674] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.207 [2024-06-10 12:17:38.515028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.207 [2024-06-10 12:17:38.515044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.207 [2024-06-10 12:17:38.515056] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.207 [2024-06-10 12:17:38.515212] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.207 [2024-06-10 12:17:38.515369] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.207 [2024-06-10 12:17:38.515378] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.207 [2024-06-10 12:17:38.515386] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.207 [2024-06-10 12:17:38.517846] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.207 [2024-06-10 12:17:38.527403] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.207 [2024-06-10 12:17:38.527797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.207 [2024-06-10 12:17:38.527814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.207 [2024-06-10 12:17:38.527823] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.207 [2024-06-10 12:17:38.527978] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.207 [2024-06-10 12:17:38.528136] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.207 [2024-06-10 12:17:38.528145] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.207 [2024-06-10 12:17:38.528153] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.207 [2024-06-10 12:17:38.530612] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.207 [2024-06-10 12:17:38.540102] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.207 [2024-06-10 12:17:38.540591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.207 [2024-06-10 12:17:38.540645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.207 [2024-06-10 12:17:38.540678] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.207 [2024-06-10 12:17:38.541132] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.207 [2024-06-10 12:17:38.541289] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.207 [2024-06-10 12:17:38.541298] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.207 [2024-06-10 12:17:38.541307] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.207 [2024-06-10 12:17:38.543766] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.207 [2024-06-10 12:17:38.552750] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.207 [2024-06-10 12:17:38.553114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.207 [2024-06-10 12:17:38.553165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.207 [2024-06-10 12:17:38.553197] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.207 [2024-06-10 12:17:38.553797] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.207 [2024-06-10 12:17:38.554345] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.207 [2024-06-10 12:17:38.554357] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.207 [2024-06-10 12:17:38.554365] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.207 [2024-06-10 12:17:38.556908] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.207 [2024-06-10 12:17:38.565456] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.207 [2024-06-10 12:17:38.565840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.207 [2024-06-10 12:17:38.565858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.207 [2024-06-10 12:17:38.565867] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.207 [2024-06-10 12:17:38.566032] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.207 [2024-06-10 12:17:38.566197] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.207 [2024-06-10 12:17:38.566207] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.207 [2024-06-10 12:17:38.566216] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.207 [2024-06-10 12:17:38.568705] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.207 [2024-06-10 12:17:38.578127] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.207 [2024-06-10 12:17:38.578533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.207 [2024-06-10 12:17:38.578574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.207 [2024-06-10 12:17:38.578607] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.207 [2024-06-10 12:17:38.579141] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.207 [2024-06-10 12:17:38.579298] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.207 [2024-06-10 12:17:38.579307] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.207 [2024-06-10 12:17:38.579316] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.207 [2024-06-10 12:17:38.581777] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.207 [2024-06-10 12:17:38.590904] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.207 [2024-06-10 12:17:38.591262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.207 [2024-06-10 12:17:38.591279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.207 [2024-06-10 12:17:38.591288] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.207 [2024-06-10 12:17:38.591443] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.207 [2024-06-10 12:17:38.591625] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.207 [2024-06-10 12:17:38.591635] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.207 [2024-06-10 12:17:38.591644] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.207 [2024-06-10 12:17:38.594157] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.207 [2024-06-10 12:17:38.603570] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.207 [2024-06-10 12:17:38.603859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.207 [2024-06-10 12:17:38.603874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.207 [2024-06-10 12:17:38.603883] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.207 [2024-06-10 12:17:38.604038] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.207 [2024-06-10 12:17:38.604195] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.207 [2024-06-10 12:17:38.604205] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.207 [2024-06-10 12:17:38.604213] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.207 [2024-06-10 12:17:38.606671] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.207 [2024-06-10 12:17:38.616226] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.207 [2024-06-10 12:17:38.616671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.207 [2024-06-10 12:17:38.616722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.207 [2024-06-10 12:17:38.616754] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.207 [2024-06-10 12:17:38.617266] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.207 [2024-06-10 12:17:38.617423] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.207 [2024-06-10 12:17:38.617433] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.207 [2024-06-10 12:17:38.617441] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.207 [2024-06-10 12:17:38.620031] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.207 [2024-06-10 12:17:38.628871] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.207 [2024-06-10 12:17:38.629306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.207 [2024-06-10 12:17:38.629357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.207 [2024-06-10 12:17:38.629389] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.207 [2024-06-10 12:17:38.629991] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.207 [2024-06-10 12:17:38.630515] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.207 [2024-06-10 12:17:38.630525] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.207 [2024-06-10 12:17:38.630534] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.207 [2024-06-10 12:17:38.633049] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.207 [2024-06-10 12:17:38.641601] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.207 [2024-06-10 12:17:38.642009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.207 [2024-06-10 12:17:38.642026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.207 [2024-06-10 12:17:38.642035] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.207 [2024-06-10 12:17:38.642193] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.207 [2024-06-10 12:17:38.642350] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.207 [2024-06-10 12:17:38.642359] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.207 [2024-06-10 12:17:38.642367] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.207 [2024-06-10 12:17:38.644827] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.207 [2024-06-10 12:17:38.654239] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.208 [2024-06-10 12:17:38.654521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.208 [2024-06-10 12:17:38.654538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.208 [2024-06-10 12:17:38.654547] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.208 [2024-06-10 12:17:38.654703] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.208 [2024-06-10 12:17:38.654860] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.208 [2024-06-10 12:17:38.654869] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.208 [2024-06-10 12:17:38.654877] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.208 [2024-06-10 12:17:38.657331] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.208 [2024-06-10 12:17:38.666893] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.208 [2024-06-10 12:17:38.667257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.208 [2024-06-10 12:17:38.667273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.208 [2024-06-10 12:17:38.667282] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.208 [2024-06-10 12:17:38.667437] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.208 [2024-06-10 12:17:38.667600] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.208 [2024-06-10 12:17:38.667610] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.208 [2024-06-10 12:17:38.667618] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.208 [2024-06-10 12:17:38.670068] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.208 [2024-06-10 12:17:38.679635] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.208 [2024-06-10 12:17:38.680083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.208 [2024-06-10 12:17:38.680135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.208 [2024-06-10 12:17:38.680167] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.208 [2024-06-10 12:17:38.680684] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.208 [2024-06-10 12:17:38.680841] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.208 [2024-06-10 12:17:38.680851] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.208 [2024-06-10 12:17:38.680863] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.208 [2024-06-10 12:17:38.683316] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.208 [2024-06-10 12:17:38.692292] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.208 [2024-06-10 12:17:38.692723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.208 [2024-06-10 12:17:38.692740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.208 [2024-06-10 12:17:38.692749] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.208 [2024-06-10 12:17:38.692905] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.208 [2024-06-10 12:17:38.693062] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.208 [2024-06-10 12:17:38.693072] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.208 [2024-06-10 12:17:38.693081] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.208 [2024-06-10 12:17:38.695539] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.208 [2024-06-10 12:17:38.705267] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.208 [2024-06-10 12:17:38.705662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.208 [2024-06-10 12:17:38.705680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.208 [2024-06-10 12:17:38.705690] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.208 [2024-06-10 12:17:38.705859] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.208 [2024-06-10 12:17:38.706028] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.208 [2024-06-10 12:17:38.706038] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.208 [2024-06-10 12:17:38.706047] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.208 [2024-06-10 12:17:38.708715] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.208 [2024-06-10 12:17:38.718154] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.208 [2024-06-10 12:17:38.718535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.208 [2024-06-10 12:17:38.718587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.208 [2024-06-10 12:17:38.718620] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.208 [2024-06-10 12:17:38.719208] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.208 [2024-06-10 12:17:38.719753] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.208 [2024-06-10 12:17:38.719764] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.208 [2024-06-10 12:17:38.719773] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.208 [2024-06-10 12:17:38.722455] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.468 [2024-06-10 12:17:38.731102] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.468 [2024-06-10 12:17:38.731519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.468 [2024-06-10 12:17:38.731536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.468 [2024-06-10 12:17:38.731545] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.468 [2024-06-10 12:17:38.731709] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.468 [2024-06-10 12:17:38.731874] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.468 [2024-06-10 12:17:38.731884] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.468 [2024-06-10 12:17:38.731892] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.468 [2024-06-10 12:17:38.734487] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.468 [2024-06-10 12:17:38.743833] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.468 [2024-06-10 12:17:38.744251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.468 [2024-06-10 12:17:38.744302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.468 [2024-06-10 12:17:38.744334] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.468 [2024-06-10 12:17:38.744832] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.468 [2024-06-10 12:17:38.744989] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.468 [2024-06-10 12:17:38.744998] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.468 [2024-06-10 12:17:38.745006] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.468 [2024-06-10 12:17:38.747459] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.468 [2024-06-10 12:17:38.756515] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.468 [2024-06-10 12:17:38.756952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.468 [2024-06-10 12:17:38.757003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.468 [2024-06-10 12:17:38.757035] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.468 [2024-06-10 12:17:38.757632] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.468 [2024-06-10 12:17:38.757869] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.468 [2024-06-10 12:17:38.757883] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.468 [2024-06-10 12:17:38.757895] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.468 [2024-06-10 12:17:38.761618] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.468 [2024-06-10 12:17:38.769697] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.468 [2024-06-10 12:17:38.770124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.468 [2024-06-10 12:17:38.770140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.468 [2024-06-10 12:17:38.770150] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.468 [2024-06-10 12:17:38.770306] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.468 [2024-06-10 12:17:38.770465] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.468 [2024-06-10 12:17:38.770474] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.468 [2024-06-10 12:17:38.770489] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.468 [2024-06-10 12:17:38.773023] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.468 [2024-06-10 12:17:38.782432] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.468 [2024-06-10 12:17:38.782867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.468 [2024-06-10 12:17:38.782884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.468 [2024-06-10 12:17:38.782893] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.468 [2024-06-10 12:17:38.783049] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.468 [2024-06-10 12:17:38.783205] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.468 [2024-06-10 12:17:38.783214] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.468 [2024-06-10 12:17:38.783223] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.468 [2024-06-10 12:17:38.785682] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.468 [2024-06-10 12:17:38.795093] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.468 [2024-06-10 12:17:38.795501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.468 [2024-06-10 12:17:38.795520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.468 [2024-06-10 12:17:38.795529] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.468 [2024-06-10 12:17:38.795710] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.468 [2024-06-10 12:17:38.795874] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.468 [2024-06-10 12:17:38.795884] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.468 [2024-06-10 12:17:38.795893] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.468 [2024-06-10 12:17:38.798546] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.468 [2024-06-10 12:17:38.807979] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.468 [2024-06-10 12:17:38.808333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.468 [2024-06-10 12:17:38.808398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.468 [2024-06-10 12:17:38.808430] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.468 [2024-06-10 12:17:38.808973] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.469 [2024-06-10 12:17:38.809142] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.469 [2024-06-10 12:17:38.809152] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.469 [2024-06-10 12:17:38.809161] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.469 [2024-06-10 12:17:38.811829] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.469 [2024-06-10 12:17:38.820950] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.469 [2024-06-10 12:17:38.821384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.469 [2024-06-10 12:17:38.821434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:49.469 [2024-06-10 12:17:38.821466] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:49.469 [2024-06-10 12:17:38.821999] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:49.469 [2024-06-10 12:17:38.822165] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.469 [2024-06-10 12:17:38.822175] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.469 [2024-06-10 12:17:38.822183] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.469 [2024-06-10 12:17:38.824777] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.469 [2024-06-10 12:17:38.833632] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.469 [2024-06-10 12:17:38.834102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.469 [2024-06-10 12:17:38.834153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:49.469 [2024-06-10 12:17:38.834185] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:49.469 [2024-06-10 12:17:38.834792] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:49.469 [2024-06-10 12:17:38.835222] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.469 [2024-06-10 12:17:38.835232] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.469 [2024-06-10 12:17:38.835241] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.469 [2024-06-10 12:17:38.837789] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.469 [2024-06-10 12:17:38.846386] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.469 [2024-06-10 12:17:38.846763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.469 [2024-06-10 12:17:38.846780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:49.469 [2024-06-10 12:17:38.846789] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:49.469 [2024-06-10 12:17:38.846953] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:49.469 [2024-06-10 12:17:38.847118] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.469 [2024-06-10 12:17:38.847128] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.469 [2024-06-10 12:17:38.847137] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.469 [2024-06-10 12:17:38.849663] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.469 [2024-06-10 12:17:38.859207] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.469 [2024-06-10 12:17:38.859668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.469 [2024-06-10 12:17:38.859720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:49.469 [2024-06-10 12:17:38.859760] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:49.469 [2024-06-10 12:17:38.860298] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:49.469 [2024-06-10 12:17:38.860455] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.469 [2024-06-10 12:17:38.860465] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.469 [2024-06-10 12:17:38.860473] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.469 [2024-06-10 12:17:38.863015] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.469 [2024-06-10 12:17:38.871931] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.469 [2024-06-10 12:17:38.872282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.469 [2024-06-10 12:17:38.872298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:49.469 [2024-06-10 12:17:38.872308] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:49.469 [2024-06-10 12:17:38.872472] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:49.469 [2024-06-10 12:17:38.872642] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.469 [2024-06-10 12:17:38.872652] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.469 [2024-06-10 12:17:38.872661] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.469 [2024-06-10 12:17:38.875234] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.469 [2024-06-10 12:17:38.884586] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.469 [2024-06-10 12:17:38.885024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.469 [2024-06-10 12:17:38.885042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:49.469 [2024-06-10 12:17:38.885051] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:49.469 [2024-06-10 12:17:38.885216] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:49.469 [2024-06-10 12:17:38.885380] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.469 [2024-06-10 12:17:38.885390] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.469 [2024-06-10 12:17:38.885399] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.469 [2024-06-10 12:17:38.887924] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.469 [2024-06-10 12:17:38.897269] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.469 [2024-06-10 12:17:38.897649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.469 [2024-06-10 12:17:38.897667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:49.469 [2024-06-10 12:17:38.897676] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:49.469 [2024-06-10 12:17:38.897841] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:49.469 [2024-06-10 12:17:38.898007] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.469 [2024-06-10 12:17:38.898020] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.469 [2024-06-10 12:17:38.898029] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.469 [2024-06-10 12:17:38.900555] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.469 [2024-06-10 12:17:38.909910] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.469 [2024-06-10 12:17:38.910216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.469 [2024-06-10 12:17:38.910233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:49.469 [2024-06-10 12:17:38.910243] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:49.469 [2024-06-10 12:17:38.910407] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:49.469 [2024-06-10 12:17:38.910578] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.469 [2024-06-10 12:17:38.910588] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.469 [2024-06-10 12:17:38.910597] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.469 [2024-06-10 12:17:38.913114] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.469 [2024-06-10 12:17:38.922571] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.469 [2024-06-10 12:17:38.923005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.469 [2024-06-10 12:17:38.923056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:49.469 [2024-06-10 12:17:38.923088] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:49.469 [2024-06-10 12:17:38.923686] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:49.469 [2024-06-10 12:17:38.924229] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.469 [2024-06-10 12:17:38.924239] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.469 [2024-06-10 12:17:38.924248] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.469 [2024-06-10 12:17:38.926772] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.469 [2024-06-10 12:17:38.935252] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.469 [2024-06-10 12:17:38.935680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.469 [2024-06-10 12:17:38.935698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:49.469 [2024-06-10 12:17:38.935708] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:49.469 [2024-06-10 12:17:38.935872] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:49.469 [2024-06-10 12:17:38.936037] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.469 [2024-06-10 12:17:38.936047] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.469 [2024-06-10 12:17:38.936056] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.470 [2024-06-10 12:17:38.938577] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.470 [2024-06-10 12:17:38.947929] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.470 [2024-06-10 12:17:38.948356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.470 [2024-06-10 12:17:38.948373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:49.470 [2024-06-10 12:17:38.948382] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:49.470 [2024-06-10 12:17:38.948556] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:49.470 [2024-06-10 12:17:38.948725] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.470 [2024-06-10 12:17:38.948735] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.470 [2024-06-10 12:17:38.948745] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.470 [2024-06-10 12:17:38.951406] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.470 [2024-06-10 12:17:38.960823] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.470 [2024-06-10 12:17:38.961203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.470 [2024-06-10 12:17:38.961220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:49.470 [2024-06-10 12:17:38.961229] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:49.470 [2024-06-10 12:17:38.961397] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:49.470 [2024-06-10 12:17:38.961572] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.470 [2024-06-10 12:17:38.961582] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.470 [2024-06-10 12:17:38.961591] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.470 [2024-06-10 12:17:38.964169] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.470 [2024-06-10 12:17:38.973657] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.470 [2024-06-10 12:17:38.974023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.470 [2024-06-10 12:17:38.974054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:49.470 [2024-06-10 12:17:38.974064] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:49.470 [2024-06-10 12:17:38.974228] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:49.470 [2024-06-10 12:17:38.974393] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.470 [2024-06-10 12:17:38.974402] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.470 [2024-06-10 12:17:38.974411] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.470 [2024-06-10 12:17:38.977012] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.730 [2024-06-10 12:17:38.986644] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.730 [2024-06-10 12:17:38.987071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.730 [2024-06-10 12:17:38.987088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:49.730 [2024-06-10 12:17:38.987100] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:49.730 [2024-06-10 12:17:38.987268] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:49.730 [2024-06-10 12:17:38.987438] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.730 [2024-06-10 12:17:38.987448] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.730 [2024-06-10 12:17:38.987457] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.730 [2024-06-10 12:17:38.990021] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.730 [2024-06-10 12:17:38.999395] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.730 [2024-06-10 12:17:38.999870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.730 [2024-06-10 12:17:38.999922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:49.730 [2024-06-10 12:17:38.999954] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:49.730 [2024-06-10 12:17:39.000461] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:49.730 [2024-06-10 12:17:39.000631] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.730 [2024-06-10 12:17:39.000641] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.730 [2024-06-10 12:17:39.000650] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.730 [2024-06-10 12:17:39.003167] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.730 [2024-06-10 12:17:39.012138] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.730 [2024-06-10 12:17:39.012582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.730 [2024-06-10 12:17:39.012634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:49.730 [2024-06-10 12:17:39.012666] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:49.730 [2024-06-10 12:17:39.013253] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:49.730 [2024-06-10 12:17:39.013639] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.730 [2024-06-10 12:17:39.013650] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.730 [2024-06-10 12:17:39.013658] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.730 [2024-06-10 12:17:39.016179] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.730 [2024-06-10 12:17:39.024796] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.730 [2024-06-10 12:17:39.025221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.730 [2024-06-10 12:17:39.025237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:49.730 [2024-06-10 12:17:39.025246] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:49.730 [2024-06-10 12:17:39.025401] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:49.730 [2024-06-10 12:17:39.025581] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.730 [2024-06-10 12:17:39.025594] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.730 [2024-06-10 12:17:39.025603] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.730 [2024-06-10 12:17:39.028123] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.730 [2024-06-10 12:17:39.037567] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.730 [2024-06-10 12:17:39.038023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.730 [2024-06-10 12:17:39.038074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:49.730 [2024-06-10 12:17:39.038106] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:49.730 [2024-06-10 12:17:39.038611] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:49.730 [2024-06-10 12:17:39.038777] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.730 [2024-06-10 12:17:39.038787] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.730 [2024-06-10 12:17:39.038795] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.730 [2024-06-10 12:17:39.041314] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.730 [2024-06-10 12:17:39.050227] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.730 [2024-06-10 12:17:39.050666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.730 [2024-06-10 12:17:39.050717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:49.730 [2024-06-10 12:17:39.050748] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:49.730 [2024-06-10 12:17:39.051280] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:49.730 [2024-06-10 12:17:39.051435] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.730 [2024-06-10 12:17:39.051445] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.730 [2024-06-10 12:17:39.051453] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.730 [2024-06-10 12:17:39.053991] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.730 [2024-06-10 12:17:39.062903] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.730 [2024-06-10 12:17:39.063333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.730 [2024-06-10 12:17:39.063349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:49.730 [2024-06-10 12:17:39.063358] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:49.730 [2024-06-10 12:17:39.063536] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:49.730 [2024-06-10 12:17:39.063701] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.730 [2024-06-10 12:17:39.063711] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.730 [2024-06-10 12:17:39.063720] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.730 [2024-06-10 12:17:39.066238] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.730 [2024-06-10 12:17:39.075584] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.730 [2024-06-10 12:17:39.076029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.730 [2024-06-10 12:17:39.076079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.730 [2024-06-10 12:17:39.076111] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.730 [2024-06-10 12:17:39.076654] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.730 [2024-06-10 12:17:39.076821] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.730 [2024-06-10 12:17:39.076830] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.730 [2024-06-10 12:17:39.076839] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.730 [2024-06-10 12:17:39.079358] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.730 [2024-06-10 12:17:39.088276] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.731 [2024-06-10 12:17:39.088709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.731 [2024-06-10 12:17:39.088761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.731 [2024-06-10 12:17:39.088793] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.731 [2024-06-10 12:17:39.089375] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.731 [2024-06-10 12:17:39.089555] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.731 [2024-06-10 12:17:39.089565] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.731 [2024-06-10 12:17:39.089574] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.731 [2024-06-10 12:17:39.092094] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.731 [2024-06-10 12:17:39.101003] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.731 [2024-06-10 12:17:39.101444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.731 [2024-06-10 12:17:39.101507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.731 [2024-06-10 12:17:39.101540] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.731 [2024-06-10 12:17:39.101947] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.731 [2024-06-10 12:17:39.102112] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.731 [2024-06-10 12:17:39.102121] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.731 [2024-06-10 12:17:39.102130] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.731 [2024-06-10 12:17:39.104670] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.731 [2024-06-10 12:17:39.113726] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.731 [2024-06-10 12:17:39.114185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.731 [2024-06-10 12:17:39.114202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.731 [2024-06-10 12:17:39.114211] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.731 [2024-06-10 12:17:39.114378] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.731 [2024-06-10 12:17:39.114557] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.731 [2024-06-10 12:17:39.114568] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.731 [2024-06-10 12:17:39.114576] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.731 [2024-06-10 12:17:39.117026] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.731 [2024-06-10 12:17:39.126430] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.731 [2024-06-10 12:17:39.126878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.731 [2024-06-10 12:17:39.126929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.731 [2024-06-10 12:17:39.126961] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.731 [2024-06-10 12:17:39.127492] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.731 [2024-06-10 12:17:39.127649] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.731 [2024-06-10 12:17:39.127659] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.731 [2024-06-10 12:17:39.127667] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.731 [2024-06-10 12:17:39.130119] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.731 [2024-06-10 12:17:39.139151] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.731 [2024-06-10 12:17:39.139571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.731 [2024-06-10 12:17:39.139587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.731 [2024-06-10 12:17:39.139597] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.731 [2024-06-10 12:17:39.139752] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.731 [2024-06-10 12:17:39.139909] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.731 [2024-06-10 12:17:39.139918] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.731 [2024-06-10 12:17:39.139926] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.731 [2024-06-10 12:17:39.142435] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.731 [2024-06-10 12:17:39.151928] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.731 [2024-06-10 12:17:39.152285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.731 [2024-06-10 12:17:39.152301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.731 [2024-06-10 12:17:39.152310] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.731 [2024-06-10 12:17:39.152474] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.731 [2024-06-10 12:17:39.152645] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.731 [2024-06-10 12:17:39.152655] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.731 [2024-06-10 12:17:39.152666] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.731 [2024-06-10 12:17:39.155186] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.731 [2024-06-10 12:17:39.164678] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.731 [2024-06-10 12:17:39.165124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.731 [2024-06-10 12:17:39.165176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.731 [2024-06-10 12:17:39.165208] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.731 [2024-06-10 12:17:39.165694] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.731 [2024-06-10 12:17:39.165859] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.731 [2024-06-10 12:17:39.165869] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.731 [2024-06-10 12:17:39.165877] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.731 [2024-06-10 12:17:39.168399] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.731 [2024-06-10 12:17:39.177453] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.731 [2024-06-10 12:17:39.177918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.731 [2024-06-10 12:17:39.177954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.731 [2024-06-10 12:17:39.177986] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.731 [2024-06-10 12:17:39.178577] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.731 [2024-06-10 12:17:39.178743] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.731 [2024-06-10 12:17:39.178753] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.731 [2024-06-10 12:17:39.178762] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.731 [2024-06-10 12:17:39.181279] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.731 [2024-06-10 12:17:39.190187] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.731 [2024-06-10 12:17:39.190639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.731 [2024-06-10 12:17:39.190655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.731 [2024-06-10 12:17:39.190664] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.731 [2024-06-10 12:17:39.190820] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.731 [2024-06-10 12:17:39.190976] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.731 [2024-06-10 12:17:39.190985] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.731 [2024-06-10 12:17:39.190993] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.731 [2024-06-10 12:17:39.193530] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.731 [2024-06-10 12:17:39.202873] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.731 [2024-06-10 12:17:39.203320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.731 [2024-06-10 12:17:39.203339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.731 [2024-06-10 12:17:39.203348] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.731 [2024-06-10 12:17:39.203534] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.731 [2024-06-10 12:17:39.203704] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.731 [2024-06-10 12:17:39.203714] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.731 [2024-06-10 12:17:39.203723] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.731 [2024-06-10 12:17:39.206386] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.731 [2024-06-10 12:17:39.215752] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.731 [2024-06-10 12:17:39.216176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.731 [2024-06-10 12:17:39.216228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.731 [2024-06-10 12:17:39.216259] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.732 [2024-06-10 12:17:39.216729] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.732 [2024-06-10 12:17:39.216894] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.732 [2024-06-10 12:17:39.216904] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.732 [2024-06-10 12:17:39.216912] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.732 [2024-06-10 12:17:39.219502] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.732 [2024-06-10 12:17:39.228640] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.732 [2024-06-10 12:17:39.229088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.732 [2024-06-10 12:17:39.229139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.732 [2024-06-10 12:17:39.229170] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.732 [2024-06-10 12:17:39.229775] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.732 [2024-06-10 12:17:39.230176] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.732 [2024-06-10 12:17:39.230185] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.732 [2024-06-10 12:17:39.230194] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.732 [2024-06-10 12:17:39.232730] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.732 [2024-06-10 12:17:39.241358] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.732 [2024-06-10 12:17:39.241747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.732 [2024-06-10 12:17:39.241799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.732 [2024-06-10 12:17:39.241831] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.732 [2024-06-10 12:17:39.242414] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.732 [2024-06-10 12:17:39.242666] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.732 [2024-06-10 12:17:39.242688] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.732 [2024-06-10 12:17:39.242700] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.732 [2024-06-10 12:17:39.246427] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.991 [2024-06-10 12:17:39.254631] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.991 [2024-06-10 12:17:39.255040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.991 [2024-06-10 12:17:39.255058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.991 [2024-06-10 12:17:39.255067] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.991 [2024-06-10 12:17:39.255236] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.991 [2024-06-10 12:17:39.255416] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.991 [2024-06-10 12:17:39.255425] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.991 [2024-06-10 12:17:39.255434] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.991 [2024-06-10 12:17:39.257996] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.991 [2024-06-10 12:17:39.267325] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.991 [2024-06-10 12:17:39.267765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.991 [2024-06-10 12:17:39.267817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.991 [2024-06-10 12:17:39.267849] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.991 [2024-06-10 12:17:39.268292] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.991 [2024-06-10 12:17:39.268457] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.991 [2024-06-10 12:17:39.268467] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.991 [2024-06-10 12:17:39.268482] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.991 [2024-06-10 12:17:39.271000] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.991 [2024-06-10 12:17:39.280060] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.991 [2024-06-10 12:17:39.280487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.991 [2024-06-10 12:17:39.280504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.991 [2024-06-10 12:17:39.280514] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.991 [2024-06-10 12:17:39.280677] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.991 [2024-06-10 12:17:39.280841] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.991 [2024-06-10 12:17:39.280851] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.991 [2024-06-10 12:17:39.280860] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.991 [2024-06-10 12:17:39.283386] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.991 [2024-06-10 12:17:39.292734] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.991 [2024-06-10 12:17:39.293148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.991 [2024-06-10 12:17:39.293198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.991 [2024-06-10 12:17:39.293229] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.991 [2024-06-10 12:17:39.293835] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.991 [2024-06-10 12:17:39.294244] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.991 [2024-06-10 12:17:39.294254] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.991 [2024-06-10 12:17:39.294262] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.991 [2024-06-10 12:17:39.296786] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.991 [2024-06-10 12:17:39.305402] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.991 [2024-06-10 12:17:39.305829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.991 [2024-06-10 12:17:39.305846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.991 [2024-06-10 12:17:39.305855] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.991 [2024-06-10 12:17:39.306018] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.991 [2024-06-10 12:17:39.306183] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.991 [2024-06-10 12:17:39.306193] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.991 [2024-06-10 12:17:39.306201] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.991 [2024-06-10 12:17:39.308727] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.991 [2024-06-10 12:17:39.318071] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.991 [2024-06-10 12:17:39.318498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.991 [2024-06-10 12:17:39.318515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.991 [2024-06-10 12:17:39.318524] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.991 [2024-06-10 12:17:39.318688] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.991 [2024-06-10 12:17:39.318853] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.991 [2024-06-10 12:17:39.318863] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.991 [2024-06-10 12:17:39.318871] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.991 [2024-06-10 12:17:39.321395] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.992 [2024-06-10 12:17:39.330784] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.992 [2024-06-10 12:17:39.331197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.992 [2024-06-10 12:17:39.331214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.992 [2024-06-10 12:17:39.331226] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.992 [2024-06-10 12:17:39.331389] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.992 [2024-06-10 12:17:39.331560] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.992 [2024-06-10 12:17:39.331571] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.992 [2024-06-10 12:17:39.331579] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.992 [2024-06-10 12:17:39.334098] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.992 [2024-06-10 12:17:39.343445] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.992 [2024-06-10 12:17:39.343888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.992 [2024-06-10 12:17:39.343940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.992 [2024-06-10 12:17:39.343971] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.992 [2024-06-10 12:17:39.344440] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.992 [2024-06-10 12:17:39.344611] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.992 [2024-06-10 12:17:39.344622] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.992 [2024-06-10 12:17:39.344631] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.992 [2024-06-10 12:17:39.347140] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.992 [2024-06-10 12:17:39.356117] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.992 [2024-06-10 12:17:39.356462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.992 [2024-06-10 12:17:39.356526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.992 [2024-06-10 12:17:39.356559] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.992 [2024-06-10 12:17:39.357085] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.992 [2024-06-10 12:17:39.357250] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.992 [2024-06-10 12:17:39.357260] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.992 [2024-06-10 12:17:39.357269] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.992 [2024-06-10 12:17:39.359792] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.992 [2024-06-10 12:17:39.368852] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.992 [2024-06-10 12:17:39.369283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.992 [2024-06-10 12:17:39.369299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.992 [2024-06-10 12:17:39.369309] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.992 [2024-06-10 12:17:39.369473] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.992 [2024-06-10 12:17:39.369644] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.992 [2024-06-10 12:17:39.369657] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.992 [2024-06-10 12:17:39.369665] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.992 [2024-06-10 12:17:39.372185] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.992 [2024-06-10 12:17:39.381546] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.992 [2024-06-10 12:17:39.381977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.992 [2024-06-10 12:17:39.382029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.992 [2024-06-10 12:17:39.382060] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.992 [2024-06-10 12:17:39.382573] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.992 [2024-06-10 12:17:39.382739] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.992 [2024-06-10 12:17:39.382748] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.992 [2024-06-10 12:17:39.382757] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.992 [2024-06-10 12:17:39.385276] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.992 [2024-06-10 12:17:39.394229] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.992 [2024-06-10 12:17:39.394574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.992 [2024-06-10 12:17:39.394590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.992 [2024-06-10 12:17:39.394599] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.992 [2024-06-10 12:17:39.394756] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.992 [2024-06-10 12:17:39.394912] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.992 [2024-06-10 12:17:39.394921] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.992 [2024-06-10 12:17:39.394929] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.992 [2024-06-10 12:17:39.397443] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.992 [2024-06-10 12:17:39.406933] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.992 [2024-06-10 12:17:39.407345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.992 [2024-06-10 12:17:39.407396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.992 [2024-06-10 12:17:39.407428] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.992 [2024-06-10 12:17:39.408031] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.992 [2024-06-10 12:17:39.408482] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.992 [2024-06-10 12:17:39.408492] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.992 [2024-06-10 12:17:39.408501] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.992 [2024-06-10 12:17:39.411019] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.992 [2024-06-10 12:17:39.419727] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.992 [2024-06-10 12:17:39.420153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.992 [2024-06-10 12:17:39.420170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:49.992 [2024-06-10 12:17:39.420180] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:49.992 [2024-06-10 12:17:39.420344] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:49.992 [2024-06-10 12:17:39.420515] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.992 [2024-06-10 12:17:39.420526] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.992 [2024-06-10 12:17:39.420534] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.992 [2024-06-10 12:17:39.423046] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.992 [2024-06-10 12:17:39.432448] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.992 [2024-06-10 12:17:39.432879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.992 [2024-06-10 12:17:39.432896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:49.992 [2024-06-10 12:17:39.432906] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:49.992 [2024-06-10 12:17:39.433070] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:49.992 [2024-06-10 12:17:39.433234] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.992 [2024-06-10 12:17:39.433244] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.992 [2024-06-10 12:17:39.433252] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.992 [2024-06-10 12:17:39.435781] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.992 [2024-06-10 12:17:39.445130] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.992 [2024-06-10 12:17:39.445534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.992 [2024-06-10 12:17:39.445551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:49.992 [2024-06-10 12:17:39.445559] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:49.992 [2024-06-10 12:17:39.445715] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:49.992 [2024-06-10 12:17:39.445871] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.992 [2024-06-10 12:17:39.445880] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.992 [2024-06-10 12:17:39.445888] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.992 [2024-06-10 12:17:39.448460] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.992 [2024-06-10 12:17:39.458173] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.992 [2024-06-10 12:17:39.458593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.993 [2024-06-10 12:17:39.458611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:49.993 [2024-06-10 12:17:39.458621] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:49.993 [2024-06-10 12:17:39.458792] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:49.993 [2024-06-10 12:17:39.458962] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.993 [2024-06-10 12:17:39.458971] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.993 [2024-06-10 12:17:39.458980] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.993 [2024-06-10 12:17:39.461749] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.993 [2024-06-10 12:17:39.471034] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.993 [2024-06-10 12:17:39.471448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.993 [2024-06-10 12:17:39.471465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:49.993 [2024-06-10 12:17:39.471474] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:49.993 [2024-06-10 12:17:39.471644] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:49.993 [2024-06-10 12:17:39.471809] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.993 [2024-06-10 12:17:39.471819] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.993 [2024-06-10 12:17:39.471827] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.993 [2024-06-10 12:17:39.474421] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.993 [2024-06-10 12:17:39.483812] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.993 [2024-06-10 12:17:39.484238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.993 [2024-06-10 12:17:39.484275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:49.993 [2024-06-10 12:17:39.484307] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:49.993 [2024-06-10 12:17:39.484914] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:49.993 [2024-06-10 12:17:39.485500] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.993 [2024-06-10 12:17:39.485510] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.993 [2024-06-10 12:17:39.485519] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.993 [2024-06-10 12:17:39.488037] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.993 [2024-06-10 12:17:39.496526] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.993 [2024-06-10 12:17:39.496974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.993 [2024-06-10 12:17:39.496992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:49.993 [2024-06-10 12:17:39.497001] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:49.993 [2024-06-10 12:17:39.497165] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:49.993 [2024-06-10 12:17:39.497330] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.993 [2024-06-10 12:17:39.497339] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.993 [2024-06-10 12:17:39.497351] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.993 [2024-06-10 12:17:39.499877] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.993 [2024-06-10 12:17:39.509431] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.993 [2024-06-10 12:17:39.509851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.993 [2024-06-10 12:17:39.509869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:49.993 [2024-06-10 12:17:39.509879] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:50.253 [2024-06-10 12:17:39.510048] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:50.253 [2024-06-10 12:17:39.510217] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.253 [2024-06-10 12:17:39.510227] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.253 [2024-06-10 12:17:39.510236] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.253 [2024-06-10 12:17:39.512821] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.253 [2024-06-10 12:17:39.522222] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.253 [2024-06-10 12:17:39.522651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.253 [2024-06-10 12:17:39.522668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:50.253 [2024-06-10 12:17:39.522677] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:50.253 [2024-06-10 12:17:39.522832] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:50.253 [2024-06-10 12:17:39.522988] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.253 [2024-06-10 12:17:39.522997] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.253 [2024-06-10 12:17:39.523006] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.253 [2024-06-10 12:17:39.525523] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.253 [2024-06-10 12:17:39.535065] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.253 [2024-06-10 12:17:39.535500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.253 [2024-06-10 12:17:39.535555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:50.253 [2024-06-10 12:17:39.535587] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:50.253 [2024-06-10 12:17:39.535977] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:50.253 [2024-06-10 12:17:39.536142] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.253 [2024-06-10 12:17:39.536152] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.253 [2024-06-10 12:17:39.536161] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.253 [2024-06-10 12:17:39.538687] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.253 [2024-06-10 12:17:39.547743] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.253 [2024-06-10 12:17:39.548171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.253 [2024-06-10 12:17:39.548187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:50.253 [2024-06-10 12:17:39.548197] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:50.253 [2024-06-10 12:17:39.548362] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:50.253 [2024-06-10 12:17:39.548534] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.253 [2024-06-10 12:17:39.548544] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.253 [2024-06-10 12:17:39.548553] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.253 [2024-06-10 12:17:39.551073] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.253 [2024-06-10 12:17:39.560412] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.253 [2024-06-10 12:17:39.560853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.253 [2024-06-10 12:17:39.560871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:50.253 [2024-06-10 12:17:39.560880] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:50.253 [2024-06-10 12:17:39.561044] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:50.253 [2024-06-10 12:17:39.561209] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.253 [2024-06-10 12:17:39.561219] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.253 [2024-06-10 12:17:39.561227] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.253 [2024-06-10 12:17:39.563759] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.253 [2024-06-10 12:17:39.573095] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.253 [2024-06-10 12:17:39.573426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.253 [2024-06-10 12:17:39.573442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:50.253 [2024-06-10 12:17:39.573451] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:50.253 [2024-06-10 12:17:39.573636] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:50.253 [2024-06-10 12:17:39.573801] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.253 [2024-06-10 12:17:39.573811] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.253 [2024-06-10 12:17:39.573820] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.253 [2024-06-10 12:17:39.576338] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.253 [2024-06-10 12:17:39.585836] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.253 [2024-06-10 12:17:39.586263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.253 [2024-06-10 12:17:39.586280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:50.253 [2024-06-10 12:17:39.586290] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:50.253 [2024-06-10 12:17:39.586457] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:50.253 [2024-06-10 12:17:39.586628] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.253 [2024-06-10 12:17:39.586638] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.253 [2024-06-10 12:17:39.586647] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.253 [2024-06-10 12:17:39.589170] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.253 [2024-06-10 12:17:39.598517] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.253 [2024-06-10 12:17:39.598920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.253 [2024-06-10 12:17:39.598962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:50.253 [2024-06-10 12:17:39.598994] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:50.253 [2024-06-10 12:17:39.599589] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:50.253 [2024-06-10 12:17:39.599754] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.253 [2024-06-10 12:17:39.599764] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.253 [2024-06-10 12:17:39.599773] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.253 [2024-06-10 12:17:39.602292] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.253 [2024-06-10 12:17:39.611202] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.254 [2024-06-10 12:17:39.611654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.254 [2024-06-10 12:17:39.611708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:50.254 [2024-06-10 12:17:39.611740] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:50.254 [2024-06-10 12:17:39.612328] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:50.254 [2024-06-10 12:17:39.612896] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.254 [2024-06-10 12:17:39.612906] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.254 [2024-06-10 12:17:39.612915] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.254 [2024-06-10 12:17:39.615436] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.254 [2024-06-10 12:17:39.624001] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.254 [2024-06-10 12:17:39.624405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.254 [2024-06-10 12:17:39.624422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:50.254 [2024-06-10 12:17:39.624431] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:50.254 [2024-06-10 12:17:39.624602] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:50.254 [2024-06-10 12:17:39.624768] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.254 [2024-06-10 12:17:39.624777] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.254 [2024-06-10 12:17:39.624789] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.254 [2024-06-10 12:17:39.627310] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.254 [2024-06-10 12:17:39.636742] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.254 [2024-06-10 12:17:39.637145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.254 [2024-06-10 12:17:39.637161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:50.254 [2024-06-10 12:17:39.637170] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:50.254 [2024-06-10 12:17:39.637326] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:50.254 [2024-06-10 12:17:39.637489] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.254 [2024-06-10 12:17:39.637499] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.254 [2024-06-10 12:17:39.637524] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.254 [2024-06-10 12:17:39.640043] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.254 [2024-06-10 12:17:39.649411] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.254 [2024-06-10 12:17:39.649844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.254 [2024-06-10 12:17:39.649883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:50.254 [2024-06-10 12:17:39.649916] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:50.254 [2024-06-10 12:17:39.650523] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:50.254 [2024-06-10 12:17:39.651017] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.254 [2024-06-10 12:17:39.651027] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.254 [2024-06-10 12:17:39.651036] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.254 [2024-06-10 12:17:39.653558] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.254 [2024-06-10 12:17:39.662178] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.254 [2024-06-10 12:17:39.662609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.254 [2024-06-10 12:17:39.662626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:50.254 [2024-06-10 12:17:39.662636] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:50.254 [2024-06-10 12:17:39.662800] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:50.254 [2024-06-10 12:17:39.662965] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.254 [2024-06-10 12:17:39.662974] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.254 [2024-06-10 12:17:39.662983] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.254 [2024-06-10 12:17:39.665511] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.254 [2024-06-10 12:17:39.674857] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.254 [2024-06-10 12:17:39.675297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.254 [2024-06-10 12:17:39.675356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:50.254 [2024-06-10 12:17:39.675389] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:50.254 [2024-06-10 12:17:39.675883] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:50.254 [2024-06-10 12:17:39.676049] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.254 [2024-06-10 12:17:39.676059] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.254 [2024-06-10 12:17:39.676067] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.254 [2024-06-10 12:17:39.678597] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.254 [2024-06-10 12:17:39.687512] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.254 [2024-06-10 12:17:39.687937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.254 [2024-06-10 12:17:39.687953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:50.254 [2024-06-10 12:17:39.687963] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:50.254 [2024-06-10 12:17:39.688126] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:50.254 [2024-06-10 12:17:39.688291] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.254 [2024-06-10 12:17:39.688300] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.254 [2024-06-10 12:17:39.688309] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.254 [2024-06-10 12:17:39.690835] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.254 [2024-06-10 12:17:39.700181] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.254 [2024-06-10 12:17:39.700603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.254 [2024-06-10 12:17:39.700650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:50.254 [2024-06-10 12:17:39.700683] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:50.254 [2024-06-10 12:17:39.701225] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:50.254 [2024-06-10 12:17:39.701381] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.254 [2024-06-10 12:17:39.701390] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.254 [2024-06-10 12:17:39.701398] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.254 [2024-06-10 12:17:39.703933] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.254 [2024-06-10 12:17:39.712847] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.254 [2024-06-10 12:17:39.713289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.254 [2024-06-10 12:17:39.713307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:50.254 [2024-06-10 12:17:39.713316] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:50.254 [2024-06-10 12:17:39.713492] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:50.254 [2024-06-10 12:17:39.713665] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.254 [2024-06-10 12:17:39.713675] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.254 [2024-06-10 12:17:39.713684] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.254 [2024-06-10 12:17:39.716350] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.254 [2024-06-10 12:17:39.725714] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.254 [2024-06-10 12:17:39.726147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.254 [2024-06-10 12:17:39.726164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:50.254 [2024-06-10 12:17:39.726174] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:50.254 [2024-06-10 12:17:39.726344] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:50.254 [2024-06-10 12:17:39.726520] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.254 [2024-06-10 12:17:39.726530] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.254 [2024-06-10 12:17:39.726540] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.254 [2024-06-10 12:17:39.729142] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.254 [2024-06-10 12:17:39.738632] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.254 [2024-06-10 12:17:39.739079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.254 [2024-06-10 12:17:39.739130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:50.254 [2024-06-10 12:17:39.739161] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:50.255 [2024-06-10 12:17:39.739764] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:50.255 [2024-06-10 12:17:39.740263] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.255 [2024-06-10 12:17:39.740273] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.255 [2024-06-10 12:17:39.740281] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.255 [2024-06-10 12:17:39.742874] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.255 [2024-06-10 12:17:39.751377] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.255 [2024-06-10 12:17:39.751754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.255 [2024-06-10 12:17:39.751772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:50.255 [2024-06-10 12:17:39.751781] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:50.255 [2024-06-10 12:17:39.751946] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:50.255 [2024-06-10 12:17:39.752110] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.255 [2024-06-10 12:17:39.752120] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.255 [2024-06-10 12:17:39.752129] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.255 [2024-06-10 12:17:39.754663] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.255 [2024-06-10 12:17:39.764031] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.255 [2024-06-10 12:17:39.764490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.255 [2024-06-10 12:17:39.764542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:50.255 [2024-06-10 12:17:39.764574] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:50.255 [2024-06-10 12:17:39.765077] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:50.255 [2024-06-10 12:17:39.765242] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.255 [2024-06-10 12:17:39.765252] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.255 [2024-06-10 12:17:39.765261] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.255 [2024-06-10 12:17:39.767925] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.515 [2024-06-10 12:17:39.776943] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.515 [2024-06-10 12:17:39.777382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.515 [2024-06-10 12:17:39.777400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:50.515 [2024-06-10 12:17:39.777410] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:50.515 [2024-06-10 12:17:39.777591] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:50.515 [2024-06-10 12:17:39.777762] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.515 [2024-06-10 12:17:39.777772] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.515 [2024-06-10 12:17:39.777781] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.515 [2024-06-10 12:17:39.780356] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.515 [2024-06-10 12:17:39.789632] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.515 [2024-06-10 12:17:39.790055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.515 [2024-06-10 12:17:39.790072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:50.515 [2024-06-10 12:17:39.790081] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:50.515 [2024-06-10 12:17:39.790245] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:50.515 [2024-06-10 12:17:39.790409] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.515 [2024-06-10 12:17:39.790419] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.515 [2024-06-10 12:17:39.790428] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.515 [2024-06-10 12:17:39.792957] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.515 [2024-06-10 12:17:39.802604] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.515 [2024-06-10 12:17:39.803066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.515 [2024-06-10 12:17:39.803116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:50.515 [2024-06-10 12:17:39.803155] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:50.515 [2024-06-10 12:17:39.803339] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:50.515 [2024-06-10 12:17:39.803514] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.515 [2024-06-10 12:17:39.803525] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.515 [2024-06-10 12:17:39.803534] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.515 [2024-06-10 12:17:39.806204] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.515 [2024-06-10 12:17:39.815510] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.515 [2024-06-10 12:17:39.815890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.515 [2024-06-10 12:17:39.815909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:50.515 [2024-06-10 12:17:39.815918] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:50.515 [2024-06-10 12:17:39.816082] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:50.515 [2024-06-10 12:17:39.816248] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.515 [2024-06-10 12:17:39.816258] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.515 [2024-06-10 12:17:39.816266] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.515 [2024-06-10 12:17:39.818869] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.515 [2024-06-10 12:17:39.828370] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.515 [2024-06-10 12:17:39.828745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.515 [2024-06-10 12:17:39.828763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:50.515 [2024-06-10 12:17:39.828772] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:50.515 [2024-06-10 12:17:39.828936] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:50.515 [2024-06-10 12:17:39.829101] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.515 [2024-06-10 12:17:39.829111] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.515 [2024-06-10 12:17:39.829119] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.516 [2024-06-10 12:17:39.831721] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.516 [2024-06-10 12:17:39.841200] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.516 [2024-06-10 12:17:39.841573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.516 [2024-06-10 12:17:39.841626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:50.516 [2024-06-10 12:17:39.841658] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:50.516 [2024-06-10 12:17:39.842245] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:50.516 [2024-06-10 12:17:39.842453] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.516 [2024-06-10 12:17:39.842466] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.516 [2024-06-10 12:17:39.842475] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.516 [2024-06-10 12:17:39.845070] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.516 [2024-06-10 12:17:39.853919] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.516 [2024-06-10 12:17:39.854345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.516 [2024-06-10 12:17:39.854362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:50.516 [2024-06-10 12:17:39.854371] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:50.516 [2024-06-10 12:17:39.854532] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:50.516 [2024-06-10 12:17:39.854689] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.516 [2024-06-10 12:17:39.854698] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.516 [2024-06-10 12:17:39.854707] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.516 [2024-06-10 12:17:39.857163] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.516 [2024-06-10 12:17:39.866593] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.516 [2024-06-10 12:17:39.867025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.516 [2024-06-10 12:17:39.867077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:50.516 [2024-06-10 12:17:39.867109] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:50.516 [2024-06-10 12:17:39.867645] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:50.516 [2024-06-10 12:17:39.867802] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.516 [2024-06-10 12:17:39.867812] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.516 [2024-06-10 12:17:39.867820] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.516 [2024-06-10 12:17:39.870272] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.516 [2024-06-10 12:17:39.879264] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.516 [2024-06-10 12:17:39.879716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.516 [2024-06-10 12:17:39.879768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:50.516 [2024-06-10 12:17:39.879800] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:50.516 [2024-06-10 12:17:39.880388] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:50.516 [2024-06-10 12:17:39.880834] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.516 [2024-06-10 12:17:39.880845] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.516 [2024-06-10 12:17:39.880854] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.516 [2024-06-10 12:17:39.883356] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.516 [2024-06-10 12:17:39.891910] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.516 [2024-06-10 12:17:39.892248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.516 [2024-06-10 12:17:39.892264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:50.516 [2024-06-10 12:17:39.892273] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:50.516 [2024-06-10 12:17:39.892429] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:50.516 [2024-06-10 12:17:39.892590] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.516 [2024-06-10 12:17:39.892600] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.516 [2024-06-10 12:17:39.892608] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.516 [2024-06-10 12:17:39.895064] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.516 [2024-06-10 12:17:39.904626] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.516 [2024-06-10 12:17:39.904979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.516 [2024-06-10 12:17:39.904996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:50.516 [2024-06-10 12:17:39.905005] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:50.516 [2024-06-10 12:17:39.905161] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:50.516 [2024-06-10 12:17:39.905317] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.516 [2024-06-10 12:17:39.905326] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.516 [2024-06-10 12:17:39.905335] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.516 [2024-06-10 12:17:39.907796] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.516 [2024-06-10 12:17:39.917344] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.516 [2024-06-10 12:17:39.917764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.516 [2024-06-10 12:17:39.917816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:50.516 [2024-06-10 12:17:39.917848] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:50.516 [2024-06-10 12:17:39.918435] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:50.516 [2024-06-10 12:17:39.918916] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.516 [2024-06-10 12:17:39.918931] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.516 [2024-06-10 12:17:39.918943] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.516 [2024-06-10 12:17:39.922680] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.516 [2024-06-10 12:17:39.930542] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.516 [2024-06-10 12:17:39.930939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.516 [2024-06-10 12:17:39.930955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:50.516 [2024-06-10 12:17:39.930964] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:50.516 [2024-06-10 12:17:39.931122] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:50.516 [2024-06-10 12:17:39.931278] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.516 [2024-06-10 12:17:39.931288] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.516 [2024-06-10 12:17:39.931296] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.516 [2024-06-10 12:17:39.933754] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.516 [2024-06-10 12:17:39.943311] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.516 [2024-06-10 12:17:39.943746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.516 [2024-06-10 12:17:39.943798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:50.516 [2024-06-10 12:17:39.943831] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:50.516 [2024-06-10 12:17:39.944415] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:50.516 [2024-06-10 12:17:39.944577] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.516 [2024-06-10 12:17:39.944586] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.516 [2024-06-10 12:17:39.944594] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.516 [2024-06-10 12:17:39.947048] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.516 [2024-06-10 12:17:39.956028] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.516 [2024-06-10 12:17:39.956466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.516 [2024-06-10 12:17:39.956489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:50.516 [2024-06-10 12:17:39.956499] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:50.516 [2024-06-10 12:17:39.956654] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:50.516 [2024-06-10 12:17:39.956809] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.516 [2024-06-10 12:17:39.956819] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.516 [2024-06-10 12:17:39.956827] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.516 [2024-06-10 12:17:39.959281] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.516 [2024-06-10 12:17:39.968698] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.516 [2024-06-10 12:17:39.969051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.517 [2024-06-10 12:17:39.969068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:50.517 [2024-06-10 12:17:39.969077] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:50.517 [2024-06-10 12:17:39.969240] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:50.517 [2024-06-10 12:17:39.969406] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.517 [2024-06-10 12:17:39.969416] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.517 [2024-06-10 12:17:39.969428] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.517 [2024-06-10 12:17:39.972113] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.517 [2024-06-10 12:17:39.981500] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.517 [2024-06-10 12:17:39.981823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.517 [2024-06-10 12:17:39.981840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:50.517 [2024-06-10 12:17:39.981850] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:50.517 [2024-06-10 12:17:39.982013] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:50.517 [2024-06-10 12:17:39.982177] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.517 [2024-06-10 12:17:39.982187] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.517 [2024-06-10 12:17:39.982196] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.517 [2024-06-10 12:17:39.984790] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.517 [2024-06-10 12:17:39.994394] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.517 [2024-06-10 12:17:39.994857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.517 [2024-06-10 12:17:39.994909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:50.517 [2024-06-10 12:17:39.994942] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:50.517 [2024-06-10 12:17:39.995543] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:50.517 [2024-06-10 12:17:39.995767] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.517 [2024-06-10 12:17:39.995777] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.517 [2024-06-10 12:17:39.995786] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.517 [2024-06-10 12:17:39.998450] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.517 [2024-06-10 12:17:40.007406] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.517 [2024-06-10 12:17:40.007713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.517 [2024-06-10 12:17:40.007731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:50.517 [2024-06-10 12:17:40.007741] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:50.517 [2024-06-10 12:17:40.007910] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:50.517 [2024-06-10 12:17:40.008080] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.517 [2024-06-10 12:17:40.008090] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.517 [2024-06-10 12:17:40.008099] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.517 [2024-06-10 12:17:40.010767] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.517 [2024-06-10 12:17:40.020353] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.517 [2024-06-10 12:17:40.020654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.517 [2024-06-10 12:17:40.020672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:50.517 [2024-06-10 12:17:40.020681] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:50.517 [2024-06-10 12:17:40.020849] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:50.517 [2024-06-10 12:17:40.021019] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.517 [2024-06-10 12:17:40.021030] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.517 [2024-06-10 12:17:40.021039] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.517 [2024-06-10 12:17:40.023705] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.776 [2024-06-10 12:17:40.034307] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.776 [2024-06-10 12:17:40.034753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.776 [2024-06-10 12:17:40.034771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:50.776 [2024-06-10 12:17:40.034781] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:50.776 [2024-06-10 12:17:40.034952] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:50.776 [2024-06-10 12:17:40.035123] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.776 [2024-06-10 12:17:40.035134] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.776 [2024-06-10 12:17:40.035143] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.776 [2024-06-10 12:17:40.037775] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.776 [2024-06-10 12:17:40.047200] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.776 [2024-06-10 12:17:40.047618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.776 [2024-06-10 12:17:40.047636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:50.776 [2024-06-10 12:17:40.047647] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:50.776 [2024-06-10 12:17:40.047812] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:50.776 [2024-06-10 12:17:40.047977] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.777 [2024-06-10 12:17:40.047987] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.777 [2024-06-10 12:17:40.047996] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.777 [2024-06-10 12:17:40.050597] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.777 [2024-06-10 12:17:40.060077] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.777 [2024-06-10 12:17:40.060443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.777 [2024-06-10 12:17:40.060460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:50.777 [2024-06-10 12:17:40.060470] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:50.777 [2024-06-10 12:17:40.060642] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:50.777 [2024-06-10 12:17:40.060808] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.777 [2024-06-10 12:17:40.060819] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.777 [2024-06-10 12:17:40.060827] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.777 [2024-06-10 12:17:40.063443] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.777 [2024-06-10 12:17:40.072930] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.777 [2024-06-10 12:17:40.073342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.777 [2024-06-10 12:17:40.073359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:50.777 [2024-06-10 12:17:40.073369] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:50.777 [2024-06-10 12:17:40.073541] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:50.777 [2024-06-10 12:17:40.073708] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.777 [2024-06-10 12:17:40.073718] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.777 [2024-06-10 12:17:40.073727] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.777 [2024-06-10 12:17:40.076319] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.777 [2024-06-10 12:17:40.085830] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.777 [2024-06-10 12:17:40.086180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.777 [2024-06-10 12:17:40.086197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:50.777 [2024-06-10 12:17:40.086207] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:50.777 [2024-06-10 12:17:40.086372] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:50.777 [2024-06-10 12:17:40.086545] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.777 [2024-06-10 12:17:40.086556] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.777 [2024-06-10 12:17:40.086565] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.777 [2024-06-10 12:17:40.089156] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.777 [2024-06-10 12:17:40.098657] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.777 [2024-06-10 12:17:40.099047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.777 [2024-06-10 12:17:40.099064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:50.777 [2024-06-10 12:17:40.099074] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:50.777 [2024-06-10 12:17:40.099249] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:50.777 [2024-06-10 12:17:40.099416] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.777 [2024-06-10 12:17:40.099426] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.777 [2024-06-10 12:17:40.099435] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.777 [2024-06-10 12:17:40.102032] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.777 [2024-06-10 12:17:40.111385] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.777 [2024-06-10 12:17:40.111675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.777 [2024-06-10 12:17:40.111692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:50.777 [2024-06-10 12:17:40.111701] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:50.777 [2024-06-10 12:17:40.111856] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:50.777 [2024-06-10 12:17:40.112013] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.777 [2024-06-10 12:17:40.112023] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.777 [2024-06-10 12:17:40.112031] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.777 [2024-06-10 12:17:40.114488] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.777 [2024-06-10 12:17:40.124039] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.777 [2024-06-10 12:17:40.124374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.777 [2024-06-10 12:17:40.124390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:50.777 [2024-06-10 12:17:40.124399] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:50.777 [2024-06-10 12:17:40.124559] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:50.777 [2024-06-10 12:17:40.124715] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.777 [2024-06-10 12:17:40.124725] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.777 [2024-06-10 12:17:40.124733] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.777 [2024-06-10 12:17:40.127185] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.777 [2024-06-10 12:17:40.136743] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.777 [2024-06-10 12:17:40.137089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.777 [2024-06-10 12:17:40.137139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:50.777 [2024-06-10 12:17:40.137171] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:50.777 [2024-06-10 12:17:40.137772] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:50.777 [2024-06-10 12:17:40.138321] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.777 [2024-06-10 12:17:40.138330] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.777 [2024-06-10 12:17:40.138339] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.777 [2024-06-10 12:17:40.140801] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.777 [2024-06-10 12:17:40.149506] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.777 [2024-06-10 12:17:40.149852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.777 [2024-06-10 12:17:40.149871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:50.777 [2024-06-10 12:17:40.149880] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:50.777 [2024-06-10 12:17:40.150035] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:50.777 [2024-06-10 12:17:40.150192] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.777 [2024-06-10 12:17:40.150201] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.777 [2024-06-10 12:17:40.150209] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.777 [2024-06-10 12:17:40.152674] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.777 [2024-06-10 12:17:40.162235] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.777 [2024-06-10 12:17:40.162634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.777 [2024-06-10 12:17:40.162651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:50.777 [2024-06-10 12:17:40.162660] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:50.777 [2024-06-10 12:17:40.162816] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:50.777 [2024-06-10 12:17:40.162973] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.777 [2024-06-10 12:17:40.162982] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.777 [2024-06-10 12:17:40.162990] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.777 [2024-06-10 12:17:40.165447] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.777 [2024-06-10 12:17:40.174955] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.777 [2024-06-10 12:17:40.175307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.777 [2024-06-10 12:17:40.175324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:50.777 [2024-06-10 12:17:40.175333] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:50.777 [2024-06-10 12:17:40.175495] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:50.777 [2024-06-10 12:17:40.175651] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.777 [2024-06-10 12:17:40.175661] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.777 [2024-06-10 12:17:40.175669] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.777 [2024-06-10 12:17:40.178126] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.778 [2024-06-10 12:17:40.187711] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.778 [2024-06-10 12:17:40.188102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.778 [2024-06-10 12:17:40.188118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:50.778 [2024-06-10 12:17:40.188127] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:50.778 [2024-06-10 12:17:40.188283] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:50.778 [2024-06-10 12:17:40.188443] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.778 [2024-06-10 12:17:40.188452] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.778 [2024-06-10 12:17:40.188461] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.778 [2024-06-10 12:17:40.191001] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.778 [2024-06-10 12:17:40.200423] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.778 [2024-06-10 12:17:40.200797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.778 [2024-06-10 12:17:40.200849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:50.778 [2024-06-10 12:17:40.200881] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:50.778 [2024-06-10 12:17:40.201277] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:50.778 [2024-06-10 12:17:40.201434] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.778 [2024-06-10 12:17:40.201443] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.778 [2024-06-10 12:17:40.201451] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.778 [2024-06-10 12:17:40.203994] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.778 [2024-06-10 12:17:40.213131] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.778 [2024-06-10 12:17:40.213564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.778 [2024-06-10 12:17:40.213581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:50.778 [2024-06-10 12:17:40.213590] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:50.778 [2024-06-10 12:17:40.213745] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:50.778 [2024-06-10 12:17:40.213901] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.778 [2024-06-10 12:17:40.213911] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.778 [2024-06-10 12:17:40.213919] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.778 [2024-06-10 12:17:40.216368] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.778 [2024-06-10 12:17:40.225860] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.778 [2024-06-10 12:17:40.226300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.778 [2024-06-10 12:17:40.226317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:50.778 [2024-06-10 12:17:40.226326] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:50.778 [2024-06-10 12:17:40.226495] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:50.778 [2024-06-10 12:17:40.226661] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.778 [2024-06-10 12:17:40.226671] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.778 [2024-06-10 12:17:40.226680] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.778 [2024-06-10 12:17:40.229345] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.778 [2024-06-10 12:17:40.238796] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.778 [2024-06-10 12:17:40.239156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.778 [2024-06-10 12:17:40.239207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:50.778 [2024-06-10 12:17:40.239240] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:50.778 [2024-06-10 12:17:40.239843] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:50.778 [2024-06-10 12:17:40.240436] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.778 [2024-06-10 12:17:40.240446] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.778 [2024-06-10 12:17:40.240455] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.778 [2024-06-10 12:17:40.243053] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.778 [2024-06-10 12:17:40.251562] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.778 [2024-06-10 12:17:40.251858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.778 [2024-06-10 12:17:40.251874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:50.778 [2024-06-10 12:17:40.251884] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:50.778 [2024-06-10 12:17:40.252049] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:50.778 [2024-06-10 12:17:40.252212] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.778 [2024-06-10 12:17:40.252223] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.778 [2024-06-10 12:17:40.252231] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.778 [2024-06-10 12:17:40.254715] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.778 [2024-06-10 12:17:40.264284] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.778 [2024-06-10 12:17:40.264668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.778 [2024-06-10 12:17:40.264685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:50.778 [2024-06-10 12:17:40.264694] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:50.778 [2024-06-10 12:17:40.264851] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:50.778 [2024-06-10 12:17:40.265007] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.778 [2024-06-10 12:17:40.265017] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.778 [2024-06-10 12:17:40.265025] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.778 [2024-06-10 12:17:40.267487] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.778 [2024-06-10 12:17:40.277055] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.778 [2024-06-10 12:17:40.277513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.778 [2024-06-10 12:17:40.277530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:50.778 [2024-06-10 12:17:40.277542] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:50.778 [2024-06-10 12:17:40.277719] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:50.778 [2024-06-10 12:17:40.277884] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.778 [2024-06-10 12:17:40.277894] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.778 [2024-06-10 12:17:40.277903] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.778 [2024-06-10 12:17:40.280421] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.778 [2024-06-10 12:17:40.290043] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.778 [2024-06-10 12:17:40.290469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.778 [2024-06-10 12:17:40.290496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:50.778 [2024-06-10 12:17:40.290507] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:50.778 [2024-06-10 12:17:40.290677] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:50.778 [2024-06-10 12:17:40.290847] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.778 [2024-06-10 12:17:40.290858] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.778 [2024-06-10 12:17:40.290867] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.778 [2024-06-10 12:17:40.293536] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.038 [2024-06-10 12:17:40.302971] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.038 [2024-06-10 12:17:40.303337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.038 [2024-06-10 12:17:40.303353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.038 [2024-06-10 12:17:40.303363] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.038 [2024-06-10 12:17:40.303538] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.038 [2024-06-10 12:17:40.303708] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.038 [2024-06-10 12:17:40.303719] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.038 [2024-06-10 12:17:40.303728] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.038 [2024-06-10 12:17:40.306394] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.038 [2024-06-10 12:17:40.315835] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.038 [2024-06-10 12:17:40.316257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.038 [2024-06-10 12:17:40.316275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.038 [2024-06-10 12:17:40.316284] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.038 [2024-06-10 12:17:40.316453] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.038 [2024-06-10 12:17:40.316628] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.038 [2024-06-10 12:17:40.316642] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.038 [2024-06-10 12:17:40.316651] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.038 [2024-06-10 12:17:40.319316] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.038 [2024-06-10 12:17:40.328755] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.038 [2024-06-10 12:17:40.329078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.038 [2024-06-10 12:17:40.329095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.038 [2024-06-10 12:17:40.329105] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.038 [2024-06-10 12:17:40.329274] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.038 [2024-06-10 12:17:40.329444] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.038 [2024-06-10 12:17:40.329453] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.038 [2024-06-10 12:17:40.329462] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.038 [2024-06-10 12:17:40.332131] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.038 [2024-06-10 12:17:40.341729] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.038 [2024-06-10 12:17:40.342168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.038 [2024-06-10 12:17:40.342185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.038 [2024-06-10 12:17:40.342195] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.038 [2024-06-10 12:17:40.342364] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.038 [2024-06-10 12:17:40.342540] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.038 [2024-06-10 12:17:40.342551] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.038 [2024-06-10 12:17:40.342560] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.038 [2024-06-10 12:17:40.345227] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.038 [2024-06-10 12:17:40.354579] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.038 [2024-06-10 12:17:40.355020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.038 [2024-06-10 12:17:40.355070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.038 [2024-06-10 12:17:40.355102] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.038 [2024-06-10 12:17:40.355706] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.038 [2024-06-10 12:17:40.355980] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.038 [2024-06-10 12:17:40.355990] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.038 [2024-06-10 12:17:40.355998] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.038 [2024-06-10 12:17:40.358451] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.038 [2024-06-10 12:17:40.367283] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.038 [2024-06-10 12:17:40.367712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.038 [2024-06-10 12:17:40.367728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.038 [2024-06-10 12:17:40.367737] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.038 [2024-06-10 12:17:40.367892] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.038 [2024-06-10 12:17:40.368049] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.038 [2024-06-10 12:17:40.368058] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.038 [2024-06-10 12:17:40.368067] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.038 [2024-06-10 12:17:40.370608] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.038 [2024-06-10 12:17:40.380025] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.038 [2024-06-10 12:17:40.380379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.038 [2024-06-10 12:17:40.380395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.039 [2024-06-10 12:17:40.380405] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.039 [2024-06-10 12:17:40.380567] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.039 [2024-06-10 12:17:40.380725] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.039 [2024-06-10 12:17:40.380734] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.039 [2024-06-10 12:17:40.380742] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.039 [2024-06-10 12:17:40.383197] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.039 [2024-06-10 12:17:40.392754] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.039 [2024-06-10 12:17:40.393215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.039 [2024-06-10 12:17:40.393267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.039 [2024-06-10 12:17:40.393299] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.039 [2024-06-10 12:17:40.393840] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.039 [2024-06-10 12:17:40.393997] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.039 [2024-06-10 12:17:40.394006] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.039 [2024-06-10 12:17:40.394014] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.039 [2024-06-10 12:17:40.397472] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.039 [2024-06-10 12:17:40.406100] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.039 [2024-06-10 12:17:40.406431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.039 [2024-06-10 12:17:40.406447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.039 [2024-06-10 12:17:40.406456] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.039 [2024-06-10 12:17:40.406644] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.039 [2024-06-10 12:17:40.406809] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.039 [2024-06-10 12:17:40.406819] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.039 [2024-06-10 12:17:40.406828] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.039 [2024-06-10 12:17:40.409329] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.039 [2024-06-10 12:17:40.418994] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.039 [2024-06-10 12:17:40.419446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.039 [2024-06-10 12:17:40.419511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.039 [2024-06-10 12:17:40.419544] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.039 [2024-06-10 12:17:40.419885] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.039 [2024-06-10 12:17:40.420050] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.039 [2024-06-10 12:17:40.420059] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.039 [2024-06-10 12:17:40.420068] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.039 [2024-06-10 12:17:40.422560] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.039 [2024-06-10 12:17:40.431678] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.039 [2024-06-10 12:17:40.432054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.039 [2024-06-10 12:17:40.432106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.039 [2024-06-10 12:17:40.432138] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.039 [2024-06-10 12:17:40.432693] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.039 [2024-06-10 12:17:40.432851] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.039 [2024-06-10 12:17:40.432860] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.039 [2024-06-10 12:17:40.432868] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.039 [2024-06-10 12:17:40.435321] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.039 [2024-06-10 12:17:40.444441] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.039 [2024-06-10 12:17:40.444895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.039 [2024-06-10 12:17:40.444912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.039 [2024-06-10 12:17:40.444922] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.039 [2024-06-10 12:17:40.445086] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.039 [2024-06-10 12:17:40.445251] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.039 [2024-06-10 12:17:40.445261] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.039 [2024-06-10 12:17:40.445272] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.039 [2024-06-10 12:17:40.447867] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.039 [2024-06-10 12:17:40.457184] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.039 [2024-06-10 12:17:40.457616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.039 [2024-06-10 12:17:40.457633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.039 [2024-06-10 12:17:40.457642] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.039 [2024-06-10 12:17:40.457797] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.039 [2024-06-10 12:17:40.457953] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.039 [2024-06-10 12:17:40.457962] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.039 [2024-06-10 12:17:40.457971] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.039 [2024-06-10 12:17:40.460434] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.039 [2024-06-10 12:17:40.469920] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.039 [2024-06-10 12:17:40.470344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.039 [2024-06-10 12:17:40.470360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.039 [2024-06-10 12:17:40.470369] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.039 [2024-06-10 12:17:40.470529] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.039 [2024-06-10 12:17:40.470686] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.039 [2024-06-10 12:17:40.470695] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.039 [2024-06-10 12:17:40.470703] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.039 [2024-06-10 12:17:40.473157] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.039 [2024-06-10 12:17:40.482684] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.039 [2024-06-10 12:17:40.483112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.039 [2024-06-10 12:17:40.483129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.039 [2024-06-10 12:17:40.483138] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.039 [2024-06-10 12:17:40.483303] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.039 [2024-06-10 12:17:40.483468] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.039 [2024-06-10 12:17:40.483484] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.039 [2024-06-10 12:17:40.483493] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.039 [2024-06-10 12:17:40.486161] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.039 [2024-06-10 12:17:40.495528] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.039 [2024-06-10 12:17:40.495904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.039 [2024-06-10 12:17:40.495955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.039 [2024-06-10 12:17:40.495987] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.039 [2024-06-10 12:17:40.496587] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.039 [2024-06-10 12:17:40.497175] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.039 [2024-06-10 12:17:40.497185] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.039 [2024-06-10 12:17:40.497194] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.039 [2024-06-10 12:17:40.499786] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.039 [2024-06-10 12:17:40.508307] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.039 [2024-06-10 12:17:40.508737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.039 [2024-06-10 12:17:40.508754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.039 [2024-06-10 12:17:40.508763] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.039 [2024-06-10 12:17:40.508927] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.039 [2024-06-10 12:17:40.509092] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.039 [2024-06-10 12:17:40.509101] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.040 [2024-06-10 12:17:40.509110] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.040 [2024-06-10 12:17:40.511706] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.040 [2024-06-10 12:17:40.521071] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.040 [2024-06-10 12:17:40.521497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.040 [2024-06-10 12:17:40.521539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.040 [2024-06-10 12:17:40.521571] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.040 [2024-06-10 12:17:40.522095] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.040 [2024-06-10 12:17:40.522260] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.040 [2024-06-10 12:17:40.522270] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.040 [2024-06-10 12:17:40.522278] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.040 [2024-06-10 12:17:40.524790] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.040 [2024-06-10 12:17:40.533963] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.040 [2024-06-10 12:17:40.534319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.040 [2024-06-10 12:17:40.534336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.040 [2024-06-10 12:17:40.534345] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.040 [2024-06-10 12:17:40.534516] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.040 [2024-06-10 12:17:40.534677] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.040 [2024-06-10 12:17:40.534687] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.040 [2024-06-10 12:17:40.534695] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.040 [2024-06-10 12:17:40.537145] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.040 [2024-06-10 12:17:40.546694] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.040 [2024-06-10 12:17:40.547102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.040 [2024-06-10 12:17:40.547144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.040 [2024-06-10 12:17:40.547177] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.040 [2024-06-10 12:17:40.547782] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.040 [2024-06-10 12:17:40.548265] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.040 [2024-06-10 12:17:40.548274] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.040 [2024-06-10 12:17:40.548282] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.040 [2024-06-10 12:17:40.550735] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.298 [2024-06-10 12:17:40.559538] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.298 [2024-06-10 12:17:40.559969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-06-10 12:17:40.560020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.298 [2024-06-10 12:17:40.560052] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.298 [2024-06-10 12:17:40.560464] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.298 [2024-06-10 12:17:40.560650] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.298 [2024-06-10 12:17:40.560660] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.298 [2024-06-10 12:17:40.560669] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.298 [2024-06-10 12:17:40.563351] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.298 [2024-06-10 12:17:40.572204] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.298 [2024-06-10 12:17:40.572630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-06-10 12:17:40.572647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.298 [2024-06-10 12:17:40.572656] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.298 [2024-06-10 12:17:40.572812] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.298 [2024-06-10 12:17:40.572968] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.298 [2024-06-10 12:17:40.572977] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.298 [2024-06-10 12:17:40.572985] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.298 [2024-06-10 12:17:40.575442] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.298 [2024-06-10 12:17:40.584934] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.298 [2024-06-10 12:17:40.585365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-06-10 12:17:40.585381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.299 [2024-06-10 12:17:40.585390] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.299 [2024-06-10 12:17:40.585552] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.299 [2024-06-10 12:17:40.585708] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.299 [2024-06-10 12:17:40.585718] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.299 [2024-06-10 12:17:40.585726] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.299 [2024-06-10 12:17:40.588179] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.299 [2024-06-10 12:17:40.597583] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.299 [2024-06-10 12:17:40.598013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-06-10 12:17:40.598030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.299 [2024-06-10 12:17:40.598039] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.299 [2024-06-10 12:17:40.598194] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.299 [2024-06-10 12:17:40.598349] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.299 [2024-06-10 12:17:40.598359] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.299 [2024-06-10 12:17:40.598367] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.299 [2024-06-10 12:17:40.600824] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.299 [2024-06-10 12:17:40.610227] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.299 [2024-06-10 12:17:40.610652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-06-10 12:17:40.610669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.299 [2024-06-10 12:17:40.610678] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.299 [2024-06-10 12:17:40.610833] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.299 [2024-06-10 12:17:40.610989] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.299 [2024-06-10 12:17:40.610998] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.299 [2024-06-10 12:17:40.611006] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.299 [2024-06-10 12:17:40.613464] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.299 [2024-06-10 12:17:40.622863] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.299 [2024-06-10 12:17:40.623292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-06-10 12:17:40.623342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.299 [2024-06-10 12:17:40.623381] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.299 [2024-06-10 12:17:40.623989] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.299 [2024-06-10 12:17:40.624213] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.299 [2024-06-10 12:17:40.624223] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.299 [2024-06-10 12:17:40.624231] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.299 [2024-06-10 12:17:40.626711] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.299 [2024-06-10 12:17:40.635538] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.299 [2024-06-10 12:17:40.635963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-06-10 12:17:40.635979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.299 [2024-06-10 12:17:40.635988] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.299 [2024-06-10 12:17:40.636143] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.299 [2024-06-10 12:17:40.636299] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.299 [2024-06-10 12:17:40.636309] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.299 [2024-06-10 12:17:40.636317] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.299 [2024-06-10 12:17:40.638774] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.299 [2024-06-10 12:17:40.648175] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.299 [2024-06-10 12:17:40.648602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-06-10 12:17:40.648618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.299 [2024-06-10 12:17:40.648627] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.299 [2024-06-10 12:17:40.648784] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.299 [2024-06-10 12:17:40.648940] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.299 [2024-06-10 12:17:40.648949] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.299 [2024-06-10 12:17:40.648957] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.299 [2024-06-10 12:17:40.651409] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.299 [2024-06-10 12:17:40.660896] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.299 [2024-06-10 12:17:40.661324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-06-10 12:17:40.661340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.299 [2024-06-10 12:17:40.661349] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.299 [2024-06-10 12:17:40.661511] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.299 [2024-06-10 12:17:40.661670] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.299 [2024-06-10 12:17:40.661680] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.299 [2024-06-10 12:17:40.661688] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.299 [2024-06-10 12:17:40.664140] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.299 [2024-06-10 12:17:40.673541] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.299 [2024-06-10 12:17:40.674008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-06-10 12:17:40.674057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.299 [2024-06-10 12:17:40.674089] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.299 [2024-06-10 12:17:40.674631] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.299 [2024-06-10 12:17:40.674796] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.299 [2024-06-10 12:17:40.674805] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.299 [2024-06-10 12:17:40.674814] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.299 [2024-06-10 12:17:40.677331] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.299 [2024-06-10 12:17:40.686247] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.299 [2024-06-10 12:17:40.686569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-06-10 12:17:40.686615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.299 [2024-06-10 12:17:40.686648] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.299 [2024-06-10 12:17:40.687236] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.299 [2024-06-10 12:17:40.687436] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.299 [2024-06-10 12:17:40.687446] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.299 [2024-06-10 12:17:40.687454] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.299 [2024-06-10 12:17:40.691016] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.299 [2024-06-10 12:17:40.699484] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.299 [2024-06-10 12:17:40.699933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-06-10 12:17:40.699984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.299 [2024-06-10 12:17:40.700015] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.299 [2024-06-10 12:17:40.700425] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.299 [2024-06-10 12:17:40.700588] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.299 [2024-06-10 12:17:40.700598] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.299 [2024-06-10 12:17:40.700606] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.299 [2024-06-10 12:17:40.703054] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.299 [2024-06-10 12:17:40.712165] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.299 [2024-06-10 12:17:40.712612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-06-10 12:17:40.712664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.299 [2024-06-10 12:17:40.712696] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.300 [2024-06-10 12:17:40.713282] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.300 [2024-06-10 12:17:40.713490] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.300 [2024-06-10 12:17:40.713499] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.300 [2024-06-10 12:17:40.713508] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.300 [2024-06-10 12:17:40.715960] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.300 [2024-06-10 12:17:40.724929] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.300 [2024-06-10 12:17:40.725329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-06-10 12:17:40.725346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.300 [2024-06-10 12:17:40.725355] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.300 [2024-06-10 12:17:40.725516] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.300 [2024-06-10 12:17:40.725673] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.300 [2024-06-10 12:17:40.725683] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.300 [2024-06-10 12:17:40.725691] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.300 [2024-06-10 12:17:40.728140] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.300 [2024-06-10 12:17:40.737688] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.300 [2024-06-10 12:17:40.738133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-06-10 12:17:40.738151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.300 [2024-06-10 12:17:40.738160] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.300 [2024-06-10 12:17:40.738323] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.300 [2024-06-10 12:17:40.738495] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.300 [2024-06-10 12:17:40.738505] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.300 [2024-06-10 12:17:40.738513] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.300 [2024-06-10 12:17:40.741174] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.300 [2024-06-10 12:17:40.750465] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.300 [2024-06-10 12:17:40.750918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-06-10 12:17:40.750935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.300 [2024-06-10 12:17:40.750947] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.300 [2024-06-10 12:17:40.751110] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.300 [2024-06-10 12:17:40.751274] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.300 [2024-06-10 12:17:40.751284] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.300 [2024-06-10 12:17:40.751293] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.300 [2024-06-10 12:17:40.753814] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.300 [2024-06-10 12:17:40.763101] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.300 [2024-06-10 12:17:40.763549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-06-10 12:17:40.763566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.300 [2024-06-10 12:17:40.763575] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.300 [2024-06-10 12:17:40.763739] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.300 [2024-06-10 12:17:40.763903] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.300 [2024-06-10 12:17:40.763913] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.300 [2024-06-10 12:17:40.763922] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.300 [2024-06-10 12:17:40.766443] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.300 [2024-06-10 12:17:40.775780] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.300 [2024-06-10 12:17:40.776205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-06-10 12:17:40.776237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.300 [2024-06-10 12:17:40.776270] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.300 [2024-06-10 12:17:40.776827] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.300 [2024-06-10 12:17:40.776983] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.300 [2024-06-10 12:17:40.776992] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.300 [2024-06-10 12:17:40.777001] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.300 [2024-06-10 12:17:40.779506] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.300 [2024-06-10 12:17:40.788465] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.300 [2024-06-10 12:17:40.788892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-06-10 12:17:40.788908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.300 [2024-06-10 12:17:40.788917] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.300 [2024-06-10 12:17:40.789073] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.300 [2024-06-10 12:17:40.789229] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.300 [2024-06-10 12:17:40.789241] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.300 [2024-06-10 12:17:40.789249] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.300 [2024-06-10 12:17:40.791705] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.300 [2024-06-10 12:17:40.801102] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.300 [2024-06-10 12:17:40.801454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-06-10 12:17:40.801470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.300 [2024-06-10 12:17:40.801485] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.300 [2024-06-10 12:17:40.801664] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.300 [2024-06-10 12:17:40.801829] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.300 [2024-06-10 12:17:40.801838] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.300 [2024-06-10 12:17:40.801847] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.300 [2024-06-10 12:17:40.804347] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.300 [2024-06-10 12:17:40.813815] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.300 [2024-06-10 12:17:40.814262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-06-10 12:17:40.814281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.300 [2024-06-10 12:17:40.814291] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.300 [2024-06-10 12:17:40.814461] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.300 [2024-06-10 12:17:40.814637] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.300 [2024-06-10 12:17:40.814648] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.300 [2024-06-10 12:17:40.814657] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.559 [2024-06-10 12:17:40.817339] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.559 [2024-06-10 12:17:40.826526] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.559 [2024-06-10 12:17:40.826993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.559 [2024-06-10 12:17:40.827044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.559 [2024-06-10 12:17:40.827076] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.559 [2024-06-10 12:17:40.827681] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.559 [2024-06-10 12:17:40.828073] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.559 [2024-06-10 12:17:40.828082] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.559 [2024-06-10 12:17:40.828091] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.559 [2024-06-10 12:17:40.830547] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.559 [2024-06-10 12:17:40.839314] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.559 [2024-06-10 12:17:40.839725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.559 [2024-06-10 12:17:40.839741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.559 [2024-06-10 12:17:40.839750] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.559 [2024-06-10 12:17:40.839906] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.560 [2024-06-10 12:17:40.840062] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.560 [2024-06-10 12:17:40.840071] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.560 [2024-06-10 12:17:40.840079] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.560 [2024-06-10 12:17:40.842558] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.560 [2024-06-10 12:17:40.852032] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.560 [2024-06-10 12:17:40.852434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-06-10 12:17:40.852473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.560 [2024-06-10 12:17:40.852519] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.560 [2024-06-10 12:17:40.853040] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.560 [2024-06-10 12:17:40.853204] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.560 [2024-06-10 12:17:40.853214] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.560 [2024-06-10 12:17:40.853223] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.560 [2024-06-10 12:17:40.855702] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.560 [2024-06-10 12:17:40.864768] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.560 [2024-06-10 12:17:40.865123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-06-10 12:17:40.865139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.560 [2024-06-10 12:17:40.865147] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.560 [2024-06-10 12:17:40.865302] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.560 [2024-06-10 12:17:40.865458] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.560 [2024-06-10 12:17:40.865467] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.560 [2024-06-10 12:17:40.865481] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.560 [2024-06-10 12:17:40.868064] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.560 [2024-06-10 12:17:40.877474] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.560 [2024-06-10 12:17:40.877879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-06-10 12:17:40.877895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.560 [2024-06-10 12:17:40.877905] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.560 [2024-06-10 12:17:40.878063] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.560 [2024-06-10 12:17:40.878220] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.560 [2024-06-10 12:17:40.878229] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.560 [2024-06-10 12:17:40.878237] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.560 [2024-06-10 12:17:40.880700] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.560 [2024-06-10 12:17:40.890246] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.560 [2024-06-10 12:17:40.890578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-06-10 12:17:40.890594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.560 [2024-06-10 12:17:40.890603] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.560 [2024-06-10 12:17:40.890759] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.560 [2024-06-10 12:17:40.890915] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.560 [2024-06-10 12:17:40.890924] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.560 [2024-06-10 12:17:40.890932] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.560 [2024-06-10 12:17:40.893388] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.560 [2024-06-10 12:17:40.903018] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.560 [2024-06-10 12:17:40.903419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-06-10 12:17:40.903435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.560 [2024-06-10 12:17:40.903444] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.560 [2024-06-10 12:17:40.903627] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.560 [2024-06-10 12:17:40.903793] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.560 [2024-06-10 12:17:40.903803] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.560 [2024-06-10 12:17:40.903812] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.560 [2024-06-10 12:17:40.906313] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.560 [2024-06-10 12:17:40.915716] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.560 [2024-06-10 12:17:40.916077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-06-10 12:17:40.916093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.560 [2024-06-10 12:17:40.916101] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.560 [2024-06-10 12:17:40.916257] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.560 [2024-06-10 12:17:40.916413] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.560 [2024-06-10 12:17:40.916423] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.560 [2024-06-10 12:17:40.916434] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.560 [2024-06-10 12:17:40.919019] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.560 [2024-06-10 12:17:40.928421] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.560 [2024-06-10 12:17:40.928848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-06-10 12:17:40.928865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:51.560 [2024-06-10 12:17:40.928874] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:51.560 [2024-06-10 12:17:40.929038] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:51.560 [2024-06-10 12:17:40.929202] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.560 [2024-06-10 12:17:40.929212] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.560 [2024-06-10 12:17:40.929221] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.560 [2024-06-10 12:17:40.931707] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.560 [2024-06-10 12:17:40.941107] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.560 [2024-06-10 12:17:40.941452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.560 [2024-06-10 12:17:40.941514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:51.560 [2024-06-10 12:17:40.941547] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:51.560 [2024-06-10 12:17:40.942060] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:51.560 [2024-06-10 12:17:40.942217] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.560 [2024-06-10 12:17:40.942226] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.560 [2024-06-10 12:17:40.942234] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.560 [2024-06-10 12:17:40.944687] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.560 [2024-06-10 12:17:40.953798] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.560 [2024-06-10 12:17:40.954237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.560 [2024-06-10 12:17:40.954287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:51.560 [2024-06-10 12:17:40.954319] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:51.560 [2024-06-10 12:17:40.954923] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:51.560 [2024-06-10 12:17:40.955234] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.560 [2024-06-10 12:17:40.955244] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.560 [2024-06-10 12:17:40.955252] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.560 [2024-06-10 12:17:40.957738] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.560 [2024-06-10 12:17:40.966565] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.560 [2024-06-10 12:17:40.966989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.560 [2024-06-10 12:17:40.967008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:51.560 [2024-06-10 12:17:40.967017] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:51.560 [2024-06-10 12:17:40.967172] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:51.560 [2024-06-10 12:17:40.967329] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.560 [2024-06-10 12:17:40.967339] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.561 [2024-06-10 12:17:40.967347] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.561 [2024-06-10 12:17:40.969800] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.561 [2024-06-10 12:17:40.979198] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.561 [2024-06-10 12:17:40.979625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.561 [2024-06-10 12:17:40.979642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:51.561 [2024-06-10 12:17:40.979651] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:51.561 [2024-06-10 12:17:40.979806] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:51.561 [2024-06-10 12:17:40.979962] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.561 [2024-06-10 12:17:40.979971] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.561 [2024-06-10 12:17:40.979980] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.561 [2024-06-10 12:17:40.982439] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.561 [2024-06-10 12:17:40.991927] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.561 [2024-06-10 12:17:40.992350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.561 [2024-06-10 12:17:40.992367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:51.561 [2024-06-10 12:17:40.992376] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:51.561 [2024-06-10 12:17:40.992546] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:51.561 [2024-06-10 12:17:40.992713] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.561 [2024-06-10 12:17:40.992722] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.561 [2024-06-10 12:17:40.992731] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.561 [2024-06-10 12:17:40.995388] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.561 [2024-06-10 12:17:41.004749] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.561 [2024-06-10 12:17:41.005204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.561 [2024-06-10 12:17:41.005255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:51.561 [2024-06-10 12:17:41.005287] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:51.561 [2024-06-10 12:17:41.005889] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:51.561 [2024-06-10 12:17:41.006364] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.561 [2024-06-10 12:17:41.006374] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.561 [2024-06-10 12:17:41.006383] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.561 [2024-06-10 12:17:41.008973] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.561 [2024-06-10 12:17:41.017427] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.561 [2024-06-10 12:17:41.017893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.561 [2024-06-10 12:17:41.017910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:51.561 [2024-06-10 12:17:41.017920] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:51.561 [2024-06-10 12:17:41.018085] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:51.561 [2024-06-10 12:17:41.018250] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.561 [2024-06-10 12:17:41.018259] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.561 [2024-06-10 12:17:41.018268] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.561 [2024-06-10 12:17:41.020753] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.561 [2024-06-10 12:17:41.030152] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.561 [2024-06-10 12:17:41.030582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.561 [2024-06-10 12:17:41.030598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:51.561 [2024-06-10 12:17:41.030607] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:51.561 [2024-06-10 12:17:41.030762] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:51.561 [2024-06-10 12:17:41.030918] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.561 [2024-06-10 12:17:41.030927] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.561 [2024-06-10 12:17:41.030936] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.561 [2024-06-10 12:17:41.033392] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.561 [2024-06-10 12:17:41.042796] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.561 [2024-06-10 12:17:41.043243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.561 [2024-06-10 12:17:41.043293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:51.561 [2024-06-10 12:17:41.043325] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:51.561 [2024-06-10 12:17:41.043928] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:51.561 [2024-06-10 12:17:41.044418] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.561 [2024-06-10 12:17:41.044427] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.561 [2024-06-10 12:17:41.044436] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.561 [2024-06-10 12:17:41.046958] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.561 [2024-06-10 12:17:41.055498] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.561 [2024-06-10 12:17:41.055937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.561 [2024-06-10 12:17:41.055988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:51.561 [2024-06-10 12:17:41.056020] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:51.561 [2024-06-10 12:17:41.056433] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:51.561 [2024-06-10 12:17:41.056596] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.561 [2024-06-10 12:17:41.056606] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.561 [2024-06-10 12:17:41.056614] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.561 [2024-06-10 12:17:41.059065] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.561 [2024-06-10 12:17:41.068178] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.561 [2024-06-10 12:17:41.068545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.561 [2024-06-10 12:17:41.068598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:51.561 [2024-06-10 12:17:41.068631] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:51.561 [2024-06-10 12:17:41.069217] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:51.561 [2024-06-10 12:17:41.069594] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.561 [2024-06-10 12:17:41.069604] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.561 [2024-06-10 12:17:41.069612] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.561 [2024-06-10 12:17:41.072063] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.820 [2024-06-10 12:17:41.081008] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.820 [2024-06-10 12:17:41.081446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.820 [2024-06-10 12:17:41.081510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:51.820 [2024-06-10 12:17:41.081543] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:51.820 [2024-06-10 12:17:41.082130] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:51.820 [2024-06-10 12:17:41.082502] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.820 [2024-06-10 12:17:41.082513] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.820 [2024-06-10 12:17:41.082522] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.820 [2024-06-10 12:17:41.085173] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.820 [2024-06-10 12:17:41.093701] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.820 [2024-06-10 12:17:41.094055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.820 [2024-06-10 12:17:41.094071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:51.820 [2024-06-10 12:17:41.094082] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:51.820 [2024-06-10 12:17:41.094238] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:51.820 [2024-06-10 12:17:41.094394] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.820 [2024-06-10 12:17:41.094403] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.820 [2024-06-10 12:17:41.094411] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.820 [2024-06-10 12:17:41.096950] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.820 [2024-06-10 12:17:41.106345] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.820 [2024-06-10 12:17:41.106747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.820 [2024-06-10 12:17:41.106763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:51.820 [2024-06-10 12:17:41.106773] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:51.820 [2024-06-10 12:17:41.106928] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:51.820 [2024-06-10 12:17:41.107084] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.820 [2024-06-10 12:17:41.107093] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.820 [2024-06-10 12:17:41.107101] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.820 [2024-06-10 12:17:41.109561] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.820 [2024-06-10 12:17:41.119277] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.820 [2024-06-10 12:17:41.119734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.820 [2024-06-10 12:17:41.119786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:51.820 [2024-06-10 12:17:41.119818] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:51.821 [2024-06-10 12:17:41.120382] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:51.821 [2024-06-10 12:17:41.120557] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.821 [2024-06-10 12:17:41.120567] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.821 [2024-06-10 12:17:41.120576] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.821 [2024-06-10 12:17:41.123238] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.821 [2024-06-10 12:17:41.132205] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.821 [2024-06-10 12:17:41.132506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.821 [2024-06-10 12:17:41.132524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:51.821 [2024-06-10 12:17:41.132534] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:51.821 [2024-06-10 12:17:41.132703] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:51.821 [2024-06-10 12:17:41.132873] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.821 [2024-06-10 12:17:41.132885] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.821 [2024-06-10 12:17:41.132894] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.821 [2024-06-10 12:17:41.135564] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.821 [2024-06-10 12:17:41.144974] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.821 [2024-06-10 12:17:41.145343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.821 [2024-06-10 12:17:41.145393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:51.821 [2024-06-10 12:17:41.145425] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:51.821 [2024-06-10 12:17:41.146030] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:51.821 [2024-06-10 12:17:41.146632] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.821 [2024-06-10 12:17:41.146642] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.821 [2024-06-10 12:17:41.146651] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.821 [2024-06-10 12:17:41.149211] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.821 [2024-06-10 12:17:41.157749] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.821 [2024-06-10 12:17:41.158172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.821 [2024-06-10 12:17:41.158188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:51.821 [2024-06-10 12:17:41.158197] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:51.821 [2024-06-10 12:17:41.158353] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:51.821 [2024-06-10 12:17:41.158514] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.821 [2024-06-10 12:17:41.158524] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.821 [2024-06-10 12:17:41.158533] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.821 [2024-06-10 12:17:41.160984] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.821 [2024-06-10 12:17:41.170387] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.821 [2024-06-10 12:17:41.170723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.821 [2024-06-10 12:17:41.170774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:51.821 [2024-06-10 12:17:41.170806] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:51.821 [2024-06-10 12:17:41.171392] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:51.821 [2024-06-10 12:17:41.171855] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.821 [2024-06-10 12:17:41.171865] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.821 [2024-06-10 12:17:41.171874] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.821 [2024-06-10 12:17:41.174382] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.821 [2024-06-10 12:17:41.183089] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.821 [2024-06-10 12:17:41.183507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.821 [2024-06-10 12:17:41.183561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:51.821 [2024-06-10 12:17:41.183594] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:51.821 [2024-06-10 12:17:41.183987] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:51.821 [2024-06-10 12:17:41.184144] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.821 [2024-06-10 12:17:41.184153] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.821 [2024-06-10 12:17:41.184161] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.821 [2024-06-10 12:17:41.186616] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.821 [2024-06-10 12:17:41.195732] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.821 [2024-06-10 12:17:41.196159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.821 [2024-06-10 12:17:41.196176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:51.821 [2024-06-10 12:17:41.196185] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:51.821 [2024-06-10 12:17:41.196340] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:51.821 [2024-06-10 12:17:41.196503] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.821 [2024-06-10 12:17:41.196513] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.821 [2024-06-10 12:17:41.196521] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.821 [2024-06-10 12:17:41.198976] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.821 [2024-06-10 12:17:41.208388] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.821 [2024-06-10 12:17:41.208815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.821 [2024-06-10 12:17:41.208832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:51.821 [2024-06-10 12:17:41.208841] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:51.821 [2024-06-10 12:17:41.208996] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:51.821 [2024-06-10 12:17:41.209152] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.821 [2024-06-10 12:17:41.209162] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.821 [2024-06-10 12:17:41.209170] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.821 [2024-06-10 12:17:41.211628] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.821 [2024-06-10 12:17:41.221032] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.821 [2024-06-10 12:17:41.221352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.822 [2024-06-10 12:17:41.221368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:51.822 [2024-06-10 12:17:41.221380] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:51.822 [2024-06-10 12:17:41.221540] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:51.822 [2024-06-10 12:17:41.221696] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.822 [2024-06-10 12:17:41.221706] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.822 [2024-06-10 12:17:41.221714] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.822 [2024-06-10 12:17:41.224165] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.822 [2024-06-10 12:17:41.233712] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.822 [2024-06-10 12:17:41.234060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.822 [2024-06-10 12:17:41.234076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:51.822 [2024-06-10 12:17:41.234086] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:51.822 [2024-06-10 12:17:41.234241] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:51.822 [2024-06-10 12:17:41.234397] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.822 [2024-06-10 12:17:41.234407] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.822 [2024-06-10 12:17:41.234415] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.822 [2024-06-10 12:17:41.236965] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.822 [2024-06-10 12:17:41.246371] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.822 [2024-06-10 12:17:41.246824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.822 [2024-06-10 12:17:41.246842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:51.822 [2024-06-10 12:17:41.246852] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:51.822 [2024-06-10 12:17:41.247016] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:51.822 [2024-06-10 12:17:41.247180] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.822 [2024-06-10 12:17:41.247190] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.822 [2024-06-10 12:17:41.247198] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.822 [2024-06-10 12:17:41.249865] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.822 [2024-06-10 12:17:41.259224] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.822 [2024-06-10 12:17:41.259587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.822 [2024-06-10 12:17:41.259604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:51.822 [2024-06-10 12:17:41.259614] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:51.822 [2024-06-10 12:17:41.259777] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:51.822 [2024-06-10 12:17:41.259942] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.822 [2024-06-10 12:17:41.259952] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.822 [2024-06-10 12:17:41.259964] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.822 [2024-06-10 12:17:41.262553] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.822 [2024-06-10 12:17:41.272016] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.822 [2024-06-10 12:17:41.272460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.822 [2024-06-10 12:17:41.272524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:51.822 [2024-06-10 12:17:41.272557] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:51.822 [2024-06-10 12:17:41.272975] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:51.822 [2024-06-10 12:17:41.273131] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.822 [2024-06-10 12:17:41.273141] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.822 [2024-06-10 12:17:41.273149] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.822 [2024-06-10 12:17:41.275607] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.822 [2024-06-10 12:17:41.284728] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.822 [2024-06-10 12:17:41.285175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.822 [2024-06-10 12:17:41.285226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:51.822 [2024-06-10 12:17:41.285258] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:51.822 [2024-06-10 12:17:41.285753] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:51.822 [2024-06-10 12:17:41.285919] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.822 [2024-06-10 12:17:41.285929] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.822 [2024-06-10 12:17:41.285937] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.822 [2024-06-10 12:17:41.288483] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.822 [2024-06-10 12:17:41.297447] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.822 [2024-06-10 12:17:41.297897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.822 [2024-06-10 12:17:41.297915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:51.822 [2024-06-10 12:17:41.297924] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:51.822 [2024-06-10 12:17:41.298087] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:51.822 [2024-06-10 12:17:41.298252] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.822 [2024-06-10 12:17:41.298261] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.822 [2024-06-10 12:17:41.298270] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.822 [2024-06-10 12:17:41.300755] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.822 [2024-06-10 12:17:41.310154] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.822 [2024-06-10 12:17:41.310596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.822 [2024-06-10 12:17:41.310647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:51.822 [2024-06-10 12:17:41.310678] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:51.822 [2024-06-10 12:17:41.311092] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:51.822 [2024-06-10 12:17:41.311247] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.822 [2024-06-10 12:17:41.311257] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.822 [2024-06-10 12:17:41.311265] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.822 [2024-06-10 12:17:41.313721] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.822 [2024-06-10 12:17:41.322835] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.823 [2024-06-10 12:17:41.323255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.823 [2024-06-10 12:17:41.323271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:51.823 [2024-06-10 12:17:41.323280] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:51.823 [2024-06-10 12:17:41.323436] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:51.823 [2024-06-10 12:17:41.323597] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.823 [2024-06-10 12:17:41.323606] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.823 [2024-06-10 12:17:41.323615] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.823 [2024-06-10 12:17:41.326064] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.823 [2024-06-10 12:17:41.335760] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:51.823 [2024-06-10 12:17:41.336200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.823 [2024-06-10 12:17:41.336218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:51.823 [2024-06-10 12:17:41.336228] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:51.823 [2024-06-10 12:17:41.336396] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:51.823 [2024-06-10 12:17:41.336570] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.823 [2024-06-10 12:17:41.336580] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.823 [2024-06-10 12:17:41.336589] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.083 [2024-06-10 12:17:41.339249] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.083 [2024-06-10 12:17:41.348678] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.083 [2024-06-10 12:17:41.349117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.083 [2024-06-10 12:17:41.349135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.083 [2024-06-10 12:17:41.349144] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.083 [2024-06-10 12:17:41.349316] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.083 [2024-06-10 12:17:41.349490] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.083 [2024-06-10 12:17:41.349501] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.083 [2024-06-10 12:17:41.349510] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.083 [2024-06-10 12:17:41.352174] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.083 [2024-06-10 12:17:41.361608] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.083 [2024-06-10 12:17:41.362047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.083 [2024-06-10 12:17:41.362064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.083 [2024-06-10 12:17:41.362074] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.083 [2024-06-10 12:17:41.362242] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.083 [2024-06-10 12:17:41.362411] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.083 [2024-06-10 12:17:41.362421] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.083 [2024-06-10 12:17:41.362430] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.083 [2024-06-10 12:17:41.365098] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.083 [2024-06-10 12:17:41.374598] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.083 [2024-06-10 12:17:41.375044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.083 [2024-06-10 12:17:41.375061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.083 [2024-06-10 12:17:41.375071] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.083 [2024-06-10 12:17:41.375241] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.083 [2024-06-10 12:17:41.375410] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.083 [2024-06-10 12:17:41.375420] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.083 [2024-06-10 12:17:41.375429] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.083 [2024-06-10 12:17:41.378094] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.083 [2024-06-10 12:17:41.387537] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.083 [2024-06-10 12:17:41.387986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.083 [2024-06-10 12:17:41.388004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.083 [2024-06-10 12:17:41.388014] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.083 [2024-06-10 12:17:41.388183] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.083 [2024-06-10 12:17:41.388352] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.083 [2024-06-10 12:17:41.388362] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.083 [2024-06-10 12:17:41.388374] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.083 [2024-06-10 12:17:41.391036] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.083 [2024-06-10 12:17:41.400465] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.083 [2024-06-10 12:17:41.400801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.083 [2024-06-10 12:17:41.400819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.083 [2024-06-10 12:17:41.400828] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.083 [2024-06-10 12:17:41.400997] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.083 [2024-06-10 12:17:41.401166] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.083 [2024-06-10 12:17:41.401176] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.083 [2024-06-10 12:17:41.401185] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.083 [2024-06-10 12:17:41.403853] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.083 [2024-06-10 12:17:41.413472] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.083 [2024-06-10 12:17:41.413850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.083 [2024-06-10 12:17:41.413867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.083 [2024-06-10 12:17:41.413877] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.083 [2024-06-10 12:17:41.414045] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.083 [2024-06-10 12:17:41.414214] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.083 [2024-06-10 12:17:41.414224] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.083 [2024-06-10 12:17:41.414233] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.083 [2024-06-10 12:17:41.416899] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.083 [2024-06-10 12:17:41.426496] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.083 [2024-06-10 12:17:41.426917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.083 [2024-06-10 12:17:41.426934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.083 [2024-06-10 12:17:41.426944] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.083 [2024-06-10 12:17:41.427112] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.083 [2024-06-10 12:17:41.427281] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.083 [2024-06-10 12:17:41.427291] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.083 [2024-06-10 12:17:41.427300] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.083 [2024-06-10 12:17:41.429964] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.083 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2377364 Killed "${NVMF_APP[@]}" "$@"
00:28:52.083 12:17:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:28:52.083 12:17:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:28:52.083 12:17:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:28:52.083 12:17:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable
00:28:52.083 12:17:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:52.083 [2024-06-10 12:17:41.439473] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.083 [2024-06-10 12:17:41.439965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.083 [2024-06-10 12:17:41.439984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.083 [2024-06-10 12:17:41.439994] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.083 [2024-06-10 12:17:41.440164] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.084 [2024-06-10 12:17:41.440334] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.084 [2024-06-10 12:17:41.440344] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.084 [2024-06-10 12:17:41.440353] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.084 12:17:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2378760
00:28:52.084 12:17:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:28:52.084 12:17:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2378760
00:28:52.084 12:17:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 2378760 ']'
00:28:52.084 12:17:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:52.084 12:17:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100
00:28:52.084 12:17:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:52.084 12:17:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable
00:28:52.084 12:17:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:52.084 [2024-06-10 12:17:41.443027] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.084 [2024-06-10 12:17:41.452492] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.084 [2024-06-10 12:17:41.452937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.084 [2024-06-10 12:17:41.452956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.084 [2024-06-10 12:17:41.452966] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.084 [2024-06-10 12:17:41.453135] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.084 [2024-06-10 12:17:41.453304] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.084 [2024-06-10 12:17:41.453314] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.084 [2024-06-10 12:17:41.453323] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.084 [2024-06-10 12:17:41.455994] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.084 [2024-06-10 12:17:41.465447] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.084 [2024-06-10 12:17:41.465875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.084 [2024-06-10 12:17:41.465893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.084 [2024-06-10 12:17:41.465907] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.084 [2024-06-10 12:17:41.466076] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.084 [2024-06-10 12:17:41.466246] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.084 [2024-06-10 12:17:41.466256] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.084 [2024-06-10 12:17:41.466265] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.084 [2024-06-10 12:17:41.468934] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.084 [2024-06-10 12:17:41.478380] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.084 [2024-06-10 12:17:41.478806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.084 [2024-06-10 12:17:41.478824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.084 [2024-06-10 12:17:41.478834] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.084 [2024-06-10 12:17:41.479002] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.084 [2024-06-10 12:17:41.479172] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.084 [2024-06-10 12:17:41.479183] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.084 [2024-06-10 12:17:41.479192] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.084 [2024-06-10 12:17:41.481874] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.084 [2024-06-10 12:17:41.487170] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization...
00:28:52.084 [2024-06-10 12:17:41.487222] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:52.084 [2024-06-10 12:17:41.491316] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.084 [2024-06-10 12:17:41.491679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.084 [2024-06-10 12:17:41.491697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.084 [2024-06-10 12:17:41.491707] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.084 [2024-06-10 12:17:41.491877] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.084 [2024-06-10 12:17:41.492047] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.084 [2024-06-10 12:17:41.492057] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.084 [2024-06-10 12:17:41.492067] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.084 [2024-06-10 12:17:41.494741] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.084 [2024-06-10 12:17:41.504340] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.084 [2024-06-10 12:17:41.504908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.084 [2024-06-10 12:17:41.504929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.084 [2024-06-10 12:17:41.504942] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.084 [2024-06-10 12:17:41.505112] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.084 [2024-06-10 12:17:41.505281] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.084 [2024-06-10 12:17:41.505291] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.084 [2024-06-10 12:17:41.505301] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.084 [2024-06-10 12:17:41.507972] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.084 [2024-06-10 12:17:41.517260] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.084 [2024-06-10 12:17:41.517705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.084 [2024-06-10 12:17:41.517723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.084 [2024-06-10 12:17:41.517733] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.084 [2024-06-10 12:17:41.517902] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.084 [2024-06-10 12:17:41.518073] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.084 [2024-06-10 12:17:41.518083] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.084 [2024-06-10 12:17:41.518092] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.084 [2024-06-10 12:17:41.520768] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.084 EAL: No free 2048 kB hugepages reported on node 1
00:28:52.084 [2024-06-10 12:17:41.530215] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.084 [2024-06-10 12:17:41.530662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.084 [2024-06-10 12:17:41.530682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.084 [2024-06-10 12:17:41.530691] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.084 [2024-06-10 12:17:41.530861] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.084 [2024-06-10 12:17:41.531031] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.084 [2024-06-10 12:17:41.531041] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.084 [2024-06-10 12:17:41.531050] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.084 [2024-06-10 12:17:41.533929] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.084 [2024-06-10 12:17:41.543225] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.084 [2024-06-10 12:17:41.543573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.084 [2024-06-10 12:17:41.543592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.084 [2024-06-10 12:17:41.543602] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.084 [2024-06-10 12:17:41.543772] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.084 [2024-06-10 12:17:41.543942] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.084 [2024-06-10 12:17:41.543956] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.084 [2024-06-10 12:17:41.543966] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.084 [2024-06-10 12:17:41.546637] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.084 [2024-06-10 12:17:41.556093] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.084 [2024-06-10 12:17:41.556464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.084 [2024-06-10 12:17:41.556489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.084 [2024-06-10 12:17:41.556500] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.085 [2024-06-10 12:17:41.556670] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.085 [2024-06-10 12:17:41.556839] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.085 [2024-06-10 12:17:41.556849] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.085 [2024-06-10 12:17:41.556859] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.085 [2024-06-10 12:17:41.559526] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.085 [2024-06-10 12:17:41.562873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:52.085 [2024-06-10 12:17:41.568986] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.085 [2024-06-10 12:17:41.569435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.085 [2024-06-10 12:17:41.569453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.085 [2024-06-10 12:17:41.569463] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.085 [2024-06-10 12:17:41.569643] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.085 [2024-06-10 12:17:41.569815] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.085 [2024-06-10 12:17:41.569825] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.085 [2024-06-10 12:17:41.569835] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.085 [2024-06-10 12:17:41.572506] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.085 [2024-06-10 12:17:41.581967] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.085 [2024-06-10 12:17:41.582393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.085 [2024-06-10 12:17:41.582411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.085 [2024-06-10 12:17:41.582421] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.085 [2024-06-10 12:17:41.582597] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.085 [2024-06-10 12:17:41.582767] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.085 [2024-06-10 12:17:41.582778] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.085 [2024-06-10 12:17:41.582787] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.085 [2024-06-10 12:17:41.585450] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.085 [2024-06-10 12:17:41.594871] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.085 [2024-06-10 12:17:41.595309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.085 [2024-06-10 12:17:41.595327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.085 [2024-06-10 12:17:41.595336] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.085 [2024-06-10 12:17:41.595507] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.085 [2024-06-10 12:17:41.595673] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.085 [2024-06-10 12:17:41.595684] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.085 [2024-06-10 12:17:41.595693] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.085 [2024-06-10 12:17:41.598334] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.344 [2024-06-10 12:17:41.607801] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.344 [2024-06-10 12:17:41.608179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.344 [2024-06-10 12:17:41.608211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.344 [2024-06-10 12:17:41.608222] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.344 [2024-06-10 12:17:41.608389] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.344 [2024-06-10 12:17:41.608558] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.344 [2024-06-10 12:17:41.608568] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.344 [2024-06-10 12:17:41.608578] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.344 [2024-06-10 12:17:41.611175] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.344 [2024-06-10 12:17:41.620703] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.344 [2024-06-10 12:17:41.621055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.344 [2024-06-10 12:17:41.621072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.344 [2024-06-10 12:17:41.621082] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.344 [2024-06-10 12:17:41.621247] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.344 [2024-06-10 12:17:41.621413] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.344 [2024-06-10 12:17:41.621423] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.344 [2024-06-10 12:17:41.621433] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.344 [2024-06-10 12:17:41.624030] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.344 [2024-06-10 12:17:41.633523] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.344 [2024-06-10 12:17:41.633823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.344 [2024-06-10 12:17:41.633841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.344 [2024-06-10 12:17:41.633856] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.344 [2024-06-10 12:17:41.634022] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.344 [2024-06-10 12:17:41.634134] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:52.344 [2024-06-10 12:17:41.634163] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:52.344 [2024-06-10 12:17:41.634173] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:52.344 [2024-06-10 12:17:41.634181] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:52.344 [2024-06-10 12:17:41.634187] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.344 [2024-06-10 12:17:41.634188] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:52.344 [2024-06-10 12:17:41.634198] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.344 [2024-06-10 12:17:41.634208] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:52.344 [2024-06-10 12:17:41.634324] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:28:52.344 [2024-06-10 12:17:41.634395] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:28:52.344 [2024-06-10 12:17:41.634463] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:52.344 [2024-06-10 12:17:41.636892] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:52.344 [2024-06-10 12:17:41.646521] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.344 [2024-06-10 12:17:41.646900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.344 [2024-06-10 12:17:41.646921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.344 [2024-06-10 12:17:41.646932] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.344 [2024-06-10 12:17:41.647101] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.344 [2024-06-10 12:17:41.647271] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.344 [2024-06-10 12:17:41.647282] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.344 [2024-06-10 12:17:41.647292] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.344 [2024-06-10 12:17:41.649969] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.344 [2024-06-10 12:17:41.659424] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.344 [2024-06-10 12:17:41.659752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.344 [2024-06-10 12:17:41.659775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.344 [2024-06-10 12:17:41.659785] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.344 [2024-06-10 12:17:41.659956] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.344 [2024-06-10 12:17:41.660126] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.344 [2024-06-10 12:17:41.660136] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.344 [2024-06-10 12:17:41.660145] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.344 [2024-06-10 12:17:41.662824] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.344 [2024-06-10 12:17:41.672440] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.344 [2024-06-10 12:17:41.672929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.344 [2024-06-10 12:17:41.672950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.344 [2024-06-10 12:17:41.672960] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.344 [2024-06-10 12:17:41.673131] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.344 [2024-06-10 12:17:41.673302] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.344 [2024-06-10 12:17:41.673313] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.344 [2024-06-10 12:17:41.673322] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.344 [2024-06-10 12:17:41.675994] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.344 [2024-06-10 12:17:41.685457] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.344 [2024-06-10 12:17:41.685956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.344 [2024-06-10 12:17:41.685980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.344 [2024-06-10 12:17:41.685990] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.344 [2024-06-10 12:17:41.686161] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.345 [2024-06-10 12:17:41.686331] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.345 [2024-06-10 12:17:41.686342] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.345 [2024-06-10 12:17:41.686351] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.345 [2024-06-10 12:17:41.689022] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.345 [2024-06-10 12:17:41.698482] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.345 [2024-06-10 12:17:41.698959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.345 [2024-06-10 12:17:41.698978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.345 [2024-06-10 12:17:41.698989] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.345 [2024-06-10 12:17:41.699158] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.345 [2024-06-10 12:17:41.699328] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.345 [2024-06-10 12:17:41.699338] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.345 [2024-06-10 12:17:41.699348] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.345 [2024-06-10 12:17:41.702022] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.345 [2024-06-10 12:17:41.711462] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.345 [2024-06-10 12:17:41.711774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.345 [2024-06-10 12:17:41.711793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.345 [2024-06-10 12:17:41.711807] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.345 [2024-06-10 12:17:41.711977] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.345 [2024-06-10 12:17:41.712147] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.345 [2024-06-10 12:17:41.712157] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.345 [2024-06-10 12:17:41.712166] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.345 [2024-06-10 12:17:41.714840] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.345 [2024-06-10 12:17:41.724399] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.345 [2024-06-10 12:17:41.724703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.345 [2024-06-10 12:17:41.724720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.345 [2024-06-10 12:17:41.724730] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.345 [2024-06-10 12:17:41.724899] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.345 [2024-06-10 12:17:41.725068] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.345 [2024-06-10 12:17:41.725078] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.345 [2024-06-10 12:17:41.725087] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.345 [2024-06-10 12:17:41.727760] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.345 [2024-06-10 12:17:41.737356] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.345 [2024-06-10 12:17:41.737679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.345 [2024-06-10 12:17:41.737697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.345 [2024-06-10 12:17:41.737706] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.345 [2024-06-10 12:17:41.737875] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.345 [2024-06-10 12:17:41.738045] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.345 [2024-06-10 12:17:41.738056] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.345 [2024-06-10 12:17:41.738065] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.345 [2024-06-10 12:17:41.740738] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.345 [2024-06-10 12:17:41.750340] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.345 [2024-06-10 12:17:41.750806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.345 [2024-06-10 12:17:41.750825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.345 [2024-06-10 12:17:41.750834] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.345 [2024-06-10 12:17:41.751004] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.345 [2024-06-10 12:17:41.751173] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.345 [2024-06-10 12:17:41.751187] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.345 [2024-06-10 12:17:41.751196] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.345 [2024-06-10 12:17:41.753866] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.345 [2024-06-10 12:17:41.763304] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.345 [2024-06-10 12:17:41.763656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.345 [2024-06-10 12:17:41.763674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.345 [2024-06-10 12:17:41.763684] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.345 [2024-06-10 12:17:41.763854] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.345 [2024-06-10 12:17:41.764023] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.345 [2024-06-10 12:17:41.764034] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.345 [2024-06-10 12:17:41.764043] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.345 [2024-06-10 12:17:41.766717] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.345 [2024-06-10 12:17:41.776318] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.345 [2024-06-10 12:17:41.776677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.345 [2024-06-10 12:17:41.776695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.345 [2024-06-10 12:17:41.776705] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.345 [2024-06-10 12:17:41.776875] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.345 [2024-06-10 12:17:41.777045] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.345 [2024-06-10 12:17:41.777055] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.345 [2024-06-10 12:17:41.777064] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.345 [2024-06-10 12:17:41.779738] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.345 [2024-06-10 12:17:41.789191] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.345 [2024-06-10 12:17:41.789628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.345 [2024-06-10 12:17:41.789647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.345 [2024-06-10 12:17:41.789657] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.345 [2024-06-10 12:17:41.789828] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.345 [2024-06-10 12:17:41.789997] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.345 [2024-06-10 12:17:41.790007] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.345 [2024-06-10 12:17:41.790016] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.345 [2024-06-10 12:17:41.792686] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.345 [2024-06-10 12:17:41.802137] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.345 [2024-06-10 12:17:41.802565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.345 [2024-06-10 12:17:41.802583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.345 [2024-06-10 12:17:41.802593] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.345 [2024-06-10 12:17:41.802761] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.345 [2024-06-10 12:17:41.802931] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.345 [2024-06-10 12:17:41.802942] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.345 [2024-06-10 12:17:41.802951] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.345 [2024-06-10 12:17:41.805616] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.345 [2024-06-10 12:17:41.815064] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.345 [2024-06-10 12:17:41.815454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.345 [2024-06-10 12:17:41.815472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.345 [2024-06-10 12:17:41.815487] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.345 [2024-06-10 12:17:41.815655] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.345 [2024-06-10 12:17:41.815825] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.345 [2024-06-10 12:17:41.815835] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.345 [2024-06-10 12:17:41.815844] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.346 [2024-06-10 12:17:41.818514] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.346 [2024-06-10 12:17:41.827932] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.346 [2024-06-10 12:17:41.828399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.346 [2024-06-10 12:17:41.828416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.346 [2024-06-10 12:17:41.828426] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.346 [2024-06-10 12:17:41.828599] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.346 [2024-06-10 12:17:41.828769] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.346 [2024-06-10 12:17:41.828780] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.346 [2024-06-10 12:17:41.828789] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.346 [2024-06-10 12:17:41.831450] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.346 [2024-06-10 12:17:41.840900] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.346 [2024-06-10 12:17:41.841326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.346 [2024-06-10 12:17:41.841344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.346 [2024-06-10 12:17:41.841353] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.346 [2024-06-10 12:17:41.841533] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.346 [2024-06-10 12:17:41.841704] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.346 [2024-06-10 12:17:41.841715] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.346 [2024-06-10 12:17:41.841724] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.346 [2024-06-10 12:17:41.844389] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.346 [2024-06-10 12:17:41.853846] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.346 [2024-06-10 12:17:41.854199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.346 [2024-06-10 12:17:41.854217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.346 [2024-06-10 12:17:41.854227] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.346 [2024-06-10 12:17:41.854398] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.346 [2024-06-10 12:17:41.854574] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.346 [2024-06-10 12:17:41.854585] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.346 [2024-06-10 12:17:41.854593] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.346 [2024-06-10 12:17:41.857257] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.605 [2024-06-10 12:17:41.866722] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.605 [2024-06-10 12:17:41.867081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.605 [2024-06-10 12:17:41.867098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.605 [2024-06-10 12:17:41.867108] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.605 [2024-06-10 12:17:41.867277] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.605 [2024-06-10 12:17:41.867446] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.605 [2024-06-10 12:17:41.867458] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.605 [2024-06-10 12:17:41.867467] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.605 [2024-06-10 12:17:41.870137] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.605 [2024-06-10 12:17:41.879617] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.605 [2024-06-10 12:17:41.880057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.605 [2024-06-10 12:17:41.880076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.605 [2024-06-10 12:17:41.880086] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.605 [2024-06-10 12:17:41.880255] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.605 [2024-06-10 12:17:41.880424] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.605 [2024-06-10 12:17:41.880435] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.605 [2024-06-10 12:17:41.880448] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.605 [2024-06-10 12:17:41.883130] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.605 [2024-06-10 12:17:41.892575] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.605 [2024-06-10 12:17:41.892950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.605 [2024-06-10 12:17:41.892968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.605 [2024-06-10 12:17:41.892977] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.605 [2024-06-10 12:17:41.893147] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.605 [2024-06-10 12:17:41.893317] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.605 [2024-06-10 12:17:41.893327] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.605 [2024-06-10 12:17:41.893336] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.605 [2024-06-10 12:17:41.896009] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.605 [2024-06-10 12:17:41.905490] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.605 [2024-06-10 12:17:41.905935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.605 [2024-06-10 12:17:41.905953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.605 [2024-06-10 12:17:41.905963] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.605 [2024-06-10 12:17:41.906133] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.605 [2024-06-10 12:17:41.906302] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.605 [2024-06-10 12:17:41.906312] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.605 [2024-06-10 12:17:41.906321] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.605 [2024-06-10 12:17:41.908991] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.605 [2024-06-10 12:17:41.918376] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.605 [2024-06-10 12:17:41.918701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.605 [2024-06-10 12:17:41.918719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.605 [2024-06-10 12:17:41.918729] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.605 [2024-06-10 12:17:41.918898] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.605 [2024-06-10 12:17:41.919067] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.605 [2024-06-10 12:17:41.919077] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.605 [2024-06-10 12:17:41.919086] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.605 [2024-06-10 12:17:41.921758] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.605 [2024-06-10 12:17:41.931358] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.605 [2024-06-10 12:17:41.931721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.605 [2024-06-10 12:17:41.931738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.605 [2024-06-10 12:17:41.931748] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.605 [2024-06-10 12:17:41.931918] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.605 [2024-06-10 12:17:41.932086] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.605 [2024-06-10 12:17:41.932097] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.605 [2024-06-10 12:17:41.932106] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.605 [2024-06-10 12:17:41.934777] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.605 [2024-06-10 12:17:41.944367] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.605 [2024-06-10 12:17:41.944669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.605 [2024-06-10 12:17:41.944687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.605 [2024-06-10 12:17:41.944697] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.605 [2024-06-10 12:17:41.944866] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.605 [2024-06-10 12:17:41.945035] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.605 [2024-06-10 12:17:41.945046] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.605 [2024-06-10 12:17:41.945055] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.605 [2024-06-10 12:17:41.947724] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.605 [2024-06-10 12:17:41.957330] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.605 [2024-06-10 12:17:41.957685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.605 [2024-06-10 12:17:41.957703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.605 [2024-06-10 12:17:41.957712] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.605 [2024-06-10 12:17:41.957881] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.605 [2024-06-10 12:17:41.958051] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.605 [2024-06-10 12:17:41.958061] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.605 [2024-06-10 12:17:41.958070] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.605 [2024-06-10 12:17:41.960740] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.605 [2024-06-10 12:17:41.970331] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.605 [2024-06-10 12:17:41.970778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.605 [2024-06-10 12:17:41.970796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.606 [2024-06-10 12:17:41.970806] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.606 [2024-06-10 12:17:41.970976] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.606 [2024-06-10 12:17:41.971149] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.606 [2024-06-10 12:17:41.971159] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.606 [2024-06-10 12:17:41.971168] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.606 [2024-06-10 12:17:41.973834] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.606 [2024-06-10 12:17:41.983271] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.606 [2024-06-10 12:17:41.983693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.606 [2024-06-10 12:17:41.983710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.606 [2024-06-10 12:17:41.983720] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.606 [2024-06-10 12:17:41.983890] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.606 [2024-06-10 12:17:41.984060] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.606 [2024-06-10 12:17:41.984070] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.606 [2024-06-10 12:17:41.984079] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.606 [2024-06-10 12:17:41.986749] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.606 [2024-06-10 12:17:41.996177] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.606 [2024-06-10 12:17:41.996594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.606 [2024-06-10 12:17:41.996612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.606 [2024-06-10 12:17:41.996622] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.606 [2024-06-10 12:17:41.996792] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.606 [2024-06-10 12:17:41.996962] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.606 [2024-06-10 12:17:41.996972] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.606 [2024-06-10 12:17:41.996981] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.606 [2024-06-10 12:17:41.999665] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.606 [2024-06-10 12:17:42.009096] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.606 [2024-06-10 12:17:42.009509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.606 [2024-06-10 12:17:42.009527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.606 [2024-06-10 12:17:42.009537] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.606 [2024-06-10 12:17:42.009706] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.606 [2024-06-10 12:17:42.009876] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.606 [2024-06-10 12:17:42.009887] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.606 [2024-06-10 12:17:42.009896] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.606 [2024-06-10 12:17:42.012564] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.606 [2024-06-10 12:17:42.021993] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.606 [2024-06-10 12:17:42.022375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.606 [2024-06-10 12:17:42.022392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.606 [2024-06-10 12:17:42.022401] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.606 [2024-06-10 12:17:42.022578] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.606 [2024-06-10 12:17:42.022749] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.606 [2024-06-10 12:17:42.022759] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.606 [2024-06-10 12:17:42.022768] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.606 [2024-06-10 12:17:42.025430] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.606 [2024-06-10 12:17:42.034875] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.606 [2024-06-10 12:17:42.035294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.606 [2024-06-10 12:17:42.035311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.606 [2024-06-10 12:17:42.035320] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.606 [2024-06-10 12:17:42.035494] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.606 [2024-06-10 12:17:42.035664] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.606 [2024-06-10 12:17:42.035675] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.606 [2024-06-10 12:17:42.035684] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.606 [2024-06-10 12:17:42.038339] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.606 [2024-06-10 12:17:42.047772] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.606 [2024-06-10 12:17:42.048199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.606 [2024-06-10 12:17:42.048216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.606 [2024-06-10 12:17:42.048226] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.606 [2024-06-10 12:17:42.048394] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.606 [2024-06-10 12:17:42.048569] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.606 [2024-06-10 12:17:42.048580] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.606 [2024-06-10 12:17:42.048589] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.606 [2024-06-10 12:17:42.051255] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.606 [2024-06-10 12:17:42.060702] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.606 [2024-06-10 12:17:42.061119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.606 [2024-06-10 12:17:42.061137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.606 [2024-06-10 12:17:42.061149] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.606 [2024-06-10 12:17:42.061318] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.606 [2024-06-10 12:17:42.061492] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.606 [2024-06-10 12:17:42.061502] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.606 [2024-06-10 12:17:42.061511] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.606 [2024-06-10 12:17:42.064179] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.606 [2024-06-10 12:17:42.073608] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.606 [2024-06-10 12:17:42.073945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.606 [2024-06-10 12:17:42.073962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.606 [2024-06-10 12:17:42.073972] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.606 [2024-06-10 12:17:42.074141] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.606 [2024-06-10 12:17:42.074314] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.606 [2024-06-10 12:17:42.074324] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.606 [2024-06-10 12:17:42.074334] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.606 [2024-06-10 12:17:42.077005] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.606 [2024-06-10 12:17:42.086600] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.606 [2024-06-10 12:17:42.087002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.606 [2024-06-10 12:17:42.087019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.606 [2024-06-10 12:17:42.087029] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.606 [2024-06-10 12:17:42.087198] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.606 [2024-06-10 12:17:42.087367] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.606 [2024-06-10 12:17:42.087377] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.606 [2024-06-10 12:17:42.087386] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.606 [2024-06-10 12:17:42.090054] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.606 [2024-06-10 12:17:42.099502] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.606 [2024-06-10 12:17:42.099925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.606 [2024-06-10 12:17:42.099942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.606 [2024-06-10 12:17:42.099952] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.606 [2024-06-10 12:17:42.100121] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.606 [2024-06-10 12:17:42.100294] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.607 [2024-06-10 12:17:42.100305] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.607 [2024-06-10 12:17:42.100314] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.607 [2024-06-10 12:17:42.102983] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.607 [2024-06-10 12:17:42.112421] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.607 [2024-06-10 12:17:42.112866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.607 [2024-06-10 12:17:42.112884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.607 [2024-06-10 12:17:42.112894] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.607 [2024-06-10 12:17:42.113063] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.607 [2024-06-10 12:17:42.113233] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.607 [2024-06-10 12:17:42.113244] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.607 [2024-06-10 12:17:42.113253] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.607 [2024-06-10 12:17:42.115933] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.866 [2024-06-10 12:17:42.125373] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.866 [2024-06-10 12:17:42.125819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.866 [2024-06-10 12:17:42.125839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.866 [2024-06-10 12:17:42.125850] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.866 [2024-06-10 12:17:42.126020] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.866 [2024-06-10 12:17:42.126190] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.866 [2024-06-10 12:17:42.126200] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.866 [2024-06-10 12:17:42.126209] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.866 [2024-06-10 12:17:42.128879] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.866 [2024-06-10 12:17:42.138321] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.866 [2024-06-10 12:17:42.138742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.866 [2024-06-10 12:17:42.138760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.866 [2024-06-10 12:17:42.138770] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.866 [2024-06-10 12:17:42.138940] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.866 [2024-06-10 12:17:42.139109] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.866 [2024-06-10 12:17:42.139120] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.866 [2024-06-10 12:17:42.139129] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.866 [2024-06-10 12:17:42.141797] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.866 [2024-06-10 12:17:42.151232] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.866 [2024-06-10 12:17:42.151575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.866 [2024-06-10 12:17:42.151593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.866 [2024-06-10 12:17:42.151603] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.866 [2024-06-10 12:17:42.151771] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.866 [2024-06-10 12:17:42.151940] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.866 [2024-06-10 12:17:42.151950] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.866 [2024-06-10 12:17:42.151959] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.866 [2024-06-10 12:17:42.154629] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.866 [2024-06-10 12:17:42.164237] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.866 [2024-06-10 12:17:42.164686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.866 [2024-06-10 12:17:42.164705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.866 [2024-06-10 12:17:42.164715] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.866 [2024-06-10 12:17:42.164884] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.866 [2024-06-10 12:17:42.165054] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.866 [2024-06-10 12:17:42.165065] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.866 [2024-06-10 12:17:42.165073] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.866 [2024-06-10 12:17:42.167734] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.866 [2024-06-10 12:17:42.177178] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.866 [2024-06-10 12:17:42.177620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.866 [2024-06-10 12:17:42.177638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420
00:28:52.866 [2024-06-10 12:17:42.177648] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set
00:28:52.866 [2024-06-10 12:17:42.177816] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor
00:28:52.866 [2024-06-10 12:17:42.177985] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.866 [2024-06-10 12:17:42.177995] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.866 [2024-06-10 12:17:42.178004] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.866 [2024-06-10 12:17:42.180674] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:52.866 [2024-06-10 12:17:42.190113] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.866 [2024-06-10 12:17:42.190539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.866 [2024-06-10 12:17:42.190557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.866 [2024-06-10 12:17:42.190570] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.866 [2024-06-10 12:17:42.190740] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.866 [2024-06-10 12:17:42.190909] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.866 [2024-06-10 12:17:42.190920] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.866 [2024-06-10 12:17:42.190929] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.866 [2024-06-10 12:17:42.193582] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.866 [2024-06-10 12:17:42.203018] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.866 [2024-06-10 12:17:42.203393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.866 [2024-06-10 12:17:42.203411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.866 [2024-06-10 12:17:42.203421] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.866 [2024-06-10 12:17:42.203594] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.866 [2024-06-10 12:17:42.203764] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.866 [2024-06-10 12:17:42.203775] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.866 [2024-06-10 12:17:42.203784] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.866 [2024-06-10 12:17:42.206451] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.866 [2024-06-10 12:17:42.215898] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.866 [2024-06-10 12:17:42.216325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.866 [2024-06-10 12:17:42.216345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.866 [2024-06-10 12:17:42.216355] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.867 [2024-06-10 12:17:42.216533] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.867 [2024-06-10 12:17:42.216705] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.867 [2024-06-10 12:17:42.216716] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.867 [2024-06-10 12:17:42.216724] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.867 [2024-06-10 12:17:42.219393] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.867 [2024-06-10 12:17:42.228841] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.867 [2024-06-10 12:17:42.229214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.867 [2024-06-10 12:17:42.229231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.867 [2024-06-10 12:17:42.229242] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.867 [2024-06-10 12:17:42.229411] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.867 [2024-06-10 12:17:42.229586] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.867 [2024-06-10 12:17:42.229601] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.867 [2024-06-10 12:17:42.229611] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.867 [2024-06-10 12:17:42.232278] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.867 [2024-06-10 12:17:42.241713] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.867 [2024-06-10 12:17:42.242161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.867 [2024-06-10 12:17:42.242179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.867 [2024-06-10 12:17:42.242189] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.867 [2024-06-10 12:17:42.242358] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.867 [2024-06-10 12:17:42.242535] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.867 [2024-06-10 12:17:42.242547] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.867 [2024-06-10 12:17:42.242556] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.867 [2024-06-10 12:17:42.245215] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.867 [2024-06-10 12:17:42.254656] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.867 [2024-06-10 12:17:42.255067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.867 [2024-06-10 12:17:42.255085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.867 [2024-06-10 12:17:42.255095] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.867 [2024-06-10 12:17:42.255264] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.867 [2024-06-10 12:17:42.255434] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.867 [2024-06-10 12:17:42.255445] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.867 [2024-06-10 12:17:42.255454] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.867 [2024-06-10 12:17:42.258121] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.867 [2024-06-10 12:17:42.267566] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.867 [2024-06-10 12:17:42.267941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.867 [2024-06-10 12:17:42.267960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.867 [2024-06-10 12:17:42.267969] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.867 [2024-06-10 12:17:42.268139] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.867 [2024-06-10 12:17:42.268307] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.867 [2024-06-10 12:17:42.268318] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.867 [2024-06-10 12:17:42.268327] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.867 [2024-06-10 12:17:42.270998] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.867 [2024-06-10 12:17:42.280431] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.867 [2024-06-10 12:17:42.280802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.867 [2024-06-10 12:17:42.280819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.867 [2024-06-10 12:17:42.280829] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.867 [2024-06-10 12:17:42.280999] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.867 [2024-06-10 12:17:42.281169] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.867 [2024-06-10 12:17:42.281180] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.867 [2024-06-10 12:17:42.281190] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.867 [2024-06-10 12:17:42.283863] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.867 12:17:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:52.867 12:17:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:28:52.867 12:17:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:52.867 12:17:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:52.867 [2024-06-10 12:17:42.293300] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.867 12:17:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:52.867 [2024-06-10 12:17:42.293746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.867 [2024-06-10 12:17:42.293765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.867 [2024-06-10 12:17:42.293775] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.867 [2024-06-10 12:17:42.293945] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.867 [2024-06-10 12:17:42.294116] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.867 [2024-06-10 12:17:42.294127] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.867 [2024-06-10 12:17:42.294136] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.867 [2024-06-10 12:17:42.296810] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.867 [2024-06-10 12:17:42.306254] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.867 [2024-06-10 12:17:42.306694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.867 [2024-06-10 12:17:42.306713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.867 [2024-06-10 12:17:42.306723] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.867 [2024-06-10 12:17:42.306894] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.867 [2024-06-10 12:17:42.307067] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.867 [2024-06-10 12:17:42.307078] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.867 [2024-06-10 12:17:42.307087] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.867 [2024-06-10 12:17:42.309758] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.867 [2024-06-10 12:17:42.319220] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.867 [2024-06-10 12:17:42.319641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.867 [2024-06-10 12:17:42.319659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.867 [2024-06-10 12:17:42.319668] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.867 [2024-06-10 12:17:42.319837] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.867 [2024-06-10 12:17:42.320007] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.867 [2024-06-10 12:17:42.320017] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.867 [2024-06-10 12:17:42.320026] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.867 [2024-06-10 12:17:42.322696] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.867 [2024-06-10 12:17:42.332144] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.867 [2024-06-10 12:17:42.332545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.867 [2024-06-10 12:17:42.332564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.867 [2024-06-10 12:17:42.332573] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.867 [2024-06-10 12:17:42.332743] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.867 [2024-06-10 12:17:42.332912] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.867 [2024-06-10 12:17:42.332922] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.867 [2024-06-10 12:17:42.332931] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.867 [2024-06-10 12:17:42.335602] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.867 12:17:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:52.867 12:17:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:52.867 12:17:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:52.867 12:17:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:52.867 [2024-06-10 12:17:42.342304] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:52.867 [2024-06-10 12:17:42.345042] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.867 [2024-06-10 12:17:42.345487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.867 [2024-06-10 12:17:42.345506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.867 [2024-06-10 12:17:42.345516] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.867 [2024-06-10 12:17:42.345685] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.867 [2024-06-10 12:17:42.345854] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.867 [2024-06-10 12:17:42.345865] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.868 [2024-06-10 12:17:42.345874] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:52.868 12:17:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:52.868 12:17:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:52.868 12:17:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:52.868 12:17:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:52.868 [2024-06-10 12:17:42.348626] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:52.868 [2024-06-10 12:17:42.357919] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.868 [2024-06-10 12:17:42.358358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.868 [2024-06-10 12:17:42.358376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.868 [2024-06-10 12:17:42.358387] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.868 [2024-06-10 12:17:42.358565] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.868 [2024-06-10 12:17:42.358736] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.868 [2024-06-10 12:17:42.358747] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.868 [2024-06-10 12:17:42.358757] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.868 [2024-06-10 12:17:42.361418] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.868 [2024-06-10 12:17:42.370868] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.868 [2024-06-10 12:17:42.371243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.868 [2024-06-10 12:17:42.371261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.868 [2024-06-10 12:17:42.371271] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.868 [2024-06-10 12:17:42.371441] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.868 [2024-06-10 12:17:42.371620] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.868 [2024-06-10 12:17:42.371631] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.868 [2024-06-10 12:17:42.371641] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.868 [2024-06-10 12:17:42.374302] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:52.868 Malloc0 00:28:52.868 12:17:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:52.868 12:17:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:52.868 12:17:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:52.868 12:17:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:52.868 [2024-06-10 12:17:42.383853] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.868 [2024-06-10 12:17:42.384289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.868 [2024-06-10 12:17:42.384307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:52.868 [2024-06-10 12:17:42.384318] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:52.868 [2024-06-10 12:17:42.384495] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:52.868 [2024-06-10 12:17:42.384668] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.868 [2024-06-10 12:17:42.384679] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.868 [2024-06-10 12:17:42.384692] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:53.126 [2024-06-10 12:17:42.387370] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:53.126 12:17:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:53.126 12:17:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:53.126 12:17:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:53.126 12:17:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:53.126 [2024-06-10 12:17:42.396919] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:53.126 [2024-06-10 12:17:42.397362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.126 [2024-06-10 12:17:42.397382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1643820 with addr=10.0.0.2, port=4420 00:28:53.126 [2024-06-10 12:17:42.397392] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643820 is same with the state(5) to be set 00:28:53.126 [2024-06-10 12:17:42.397571] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643820 (9): Bad file descriptor 00:28:53.126 [2024-06-10 12:17:42.397743] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:53.126 [2024-06-10 12:17:42.397754] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:53.126 [2024-06-10 12:17:42.397763] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:53.126 12:17:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:53.126 12:17:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:53.126 12:17:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:53.126 12:17:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:53.126 [2024-06-10 12:17:42.400503] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:53.126 [2024-06-10 12:17:42.402968] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:53.126 12:17:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:53.126 12:17:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2377887 00:28:53.126 [2024-06-10 12:17:42.409846] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:53.126 [2024-06-10 12:17:42.439698] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:03.097 00:29:03.097 Latency(us) 00:29:03.097 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:03.097 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:03.097 Verification LBA range: start 0x0 length 0x4000 00:29:03.097 Nvme1n1 : 15.01 8552.44 33.41 13483.47 0.00 5789.73 616.04 24536.68 00:29:03.097 =================================================================================================================== 00:29:03.097 Total : 8552.44 33.41 13483.47 0.00 5789.73 616.04 24536.68 00:29:03.097 12:17:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:03.097 12:17:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:03.097 12:17:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:03.097 12:17:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:03.097 12:17:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:03.097 12:17:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:03.097 12:17:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:03.097 12:17:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:03.097 12:17:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:29:03.097 12:17:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:03.097 12:17:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:29:03.097 12:17:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:03.097 12:17:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:03.097 rmmod nvme_tcp 00:29:03.097 rmmod nvme_fabrics 00:29:03.097 rmmod nvme_keyring 00:29:03.097 12:17:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:03.097 12:17:51 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@124 -- # set -e 00:29:03.097 12:17:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:29:03.097 12:17:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2378760 ']' 00:29:03.097 12:17:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2378760 00:29:03.097 12:17:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@949 -- # '[' -z 2378760 ']' 00:29:03.097 12:17:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # kill -0 2378760 00:29:03.097 12:17:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # uname 00:29:03.097 12:17:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:03.097 12:17:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2378760 00:29:03.097 12:17:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:29:03.097 12:17:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:29:03.097 12:17:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2378760' 00:29:03.097 killing process with pid 2378760 00:29:03.097 12:17:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@968 -- # kill 2378760 00:29:03.097 12:17:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@973 -- # wait 2378760 00:29:03.097 12:17:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:03.097 12:17:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:03.097 12:17:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:03.097 12:17:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:03.097 12:17:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:03.097 12:17:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:03.097 12:17:51 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:03.097 12:17:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.034 12:17:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:04.034 00:29:04.034 real 0m27.271s 00:29:04.034 user 1m2.198s 00:29:04.034 sys 0m8.004s 00:29:04.034 12:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:04.034 12:17:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:04.034 ************************************ 00:29:04.034 END TEST nvmf_bdevperf 00:29:04.034 ************************************ 00:29:04.034 12:17:53 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:04.034 12:17:53 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:29:04.034 12:17:53 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:04.034 12:17:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:04.034 ************************************ 00:29:04.034 START TEST nvmf_target_disconnect 00:29:04.034 ************************************ 00:29:04.034 12:17:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:04.293 * Looking for test storage... 
00:29:04.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:04.293 12:17:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:04.293 12:17:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:04.293 12:17:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:04.293 12:17:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:04.293 12:17:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:04.293 12:17:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:04.293 12:17:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:04.293 12:17:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:04.293 12:17:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:04.293 12:17:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:04.293 12:17:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:04.293 12:17:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:04.293 12:17:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:04.293 12:17:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:04.293 12:17:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:04.293 12:17:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:04.293 12:17:53 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:04.293 12:17:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:04.293 12:17:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:04.293 12:17:53 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:04.293 12:17:53 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:04.293 12:17:53 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:04.293 12:17:53 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.293 12:17:53 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.293 12:17:53 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.293 12:17:53 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:04.294 12:17:53 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.294 12:17:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:29:04.294 12:17:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:04.294 12:17:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:04.294 12:17:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:04.294 12:17:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:04.294 12:17:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:04.294 12:17:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:04.294 12:17:53 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:04.294 12:17:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:04.294 12:17:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:04.294 12:17:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:04.294 12:17:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:04.294 12:17:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:04.294 12:17:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:04.294 12:17:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:04.294 12:17:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:04.294 12:17:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:04.294 12:17:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:04.294 12:17:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.294 12:17:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:04.294 12:17:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.294 12:17:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:04.294 12:17:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:04.294 12:17:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:29:04.294 12:17:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:10.855 12:17:59 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:10.855 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:29:10.855 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:10.855 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:10.855 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:10.855 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:10.855 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:10.855 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:29:10.855 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:10.855 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:29:10.855 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:29:10.855 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:29:10.855 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:29:10.855 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:29:10.855 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:29:10.855 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:10.855 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:10.855 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:10.855 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:10.855 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:10.855 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:10.855 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:10.855 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:10.856 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:10.856 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:10.856 12:17:59 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:10.856 Found net devices under 0000:af:00.0: cvl_0_0 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:10.856 Found net devices under 0000:af:00.1: cvl_0_1 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:10.856 12:17:59 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:10.856 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:10.856 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:29:10.856 00:29:10.856 --- 10.0.0.2 ping statistics --- 00:29:10.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.856 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:10.856 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:10.856 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:29:10.856 00:29:10.856 --- 10.0.0.1 ping statistics --- 00:29:10.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.856 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:10.856 12:17:59 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:10.856 ************************************ 00:29:10.856 START TEST nvmf_target_disconnect_tc1 00:29:10.856 ************************************ 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc1 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@649 -- # local es=0 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:10.856 12:17:59 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:10.856 12:17:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:10.856 EAL: No free 2048 kB hugepages reported on node 1 00:29:10.857 [2024-06-10 12:18:00.057137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.857 [2024-06-10 12:18:00.057187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200bec0 with addr=10.0.0.2, port=4420 00:29:10.857 [2024-06-10 12:18:00.057208] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:10.857 [2024-06-10 12:18:00.057219] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:10.857 [2024-06-10 12:18:00.057227] nvme.c: 
898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:10.857 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:10.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:10.857 Initializing NVMe Controllers 00:29:10.857 12:18:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # es=1 00:29:10.857 12:18:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:10.857 12:18:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:10.857 12:18:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:10.857 00:29:10.857 real 0m0.122s 00:29:10.857 user 0m0.050s 00:29:10.857 sys 0m0.072s 00:29:10.857 12:18:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:10.857 12:18:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:10.857 ************************************ 00:29:10.857 END TEST nvmf_target_disconnect_tc1 00:29:10.857 ************************************ 00:29:10.857 12:18:00 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:10.857 12:18:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:29:10.857 12:18:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:10.857 12:18:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:10.857 ************************************ 00:29:10.857 START TEST nvmf_target_disconnect_tc2 00:29:10.857 ************************************ 00:29:10.857 12:18:00 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc2 00:29:10.857 12:18:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:10.857 12:18:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:10.857 12:18:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:10.857 12:18:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:10.857 12:18:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.857 12:18:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2384067 00:29:10.857 12:18:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2384067 00:29:10.857 12:18:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:10.857 12:18:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 2384067 ']' 00:29:10.857 12:18:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:10.857 12:18:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:10.857 12:18:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:10.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:10.857 12:18:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:10.857 12:18:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.857 [2024-06-10 12:18:00.209955] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:29:10.857 [2024-06-10 12:18:00.210003] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:10.857 EAL: No free 2048 kB hugepages reported on node 1 00:29:10.857 [2024-06-10 12:18:00.302518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:11.115 [2024-06-10 12:18:00.380567] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:11.115 [2024-06-10 12:18:00.380604] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:11.115 [2024-06-10 12:18:00.380614] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:11.115 [2024-06-10 12:18:00.380623] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:11.115 [2024-06-10 12:18:00.380631] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:11.115 [2024-06-10 12:18:00.380752] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5
00:29:11.115 [2024-06-10 12:18:00.380865] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6
00:29:11.115 [2024-06-10 12:18:00.380979] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4
00:29:11.115 [2024-06-10 12:18:00.380980] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7
00:29:11.680 12:18:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:29:11.680 12:18:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0
00:29:11.680 12:18:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:29:11.680 12:18:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable
00:29:11.680 12:18:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:11.680 12:18:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:11.680 12:18:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:11.680 12:18:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:11.680 12:18:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:11.680 Malloc0
00:29:11.680 12:18:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:11.680 12:18:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:11.680 12:18:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:11.680 12:18:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:11.680 [2024-06-10 12:18:01.089349] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:11.680 12:18:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:11.680 12:18:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:11.680 12:18:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:11.680 12:18:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:11.680 12:18:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:11.680 12:18:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:11.680 12:18:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:11.680 12:18:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:11.680 12:18:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:11.680 12:18:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:11.680 12:18:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:11.681 12:18:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:11.681 [2024-06-10 12:18:01.117614] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:11.681 12:18:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:11.681 12:18:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:11.681 12:18:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:11.681 12:18:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:11.681 12:18:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:11.681 12:18:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2384328
00:29:11.681 12:18:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:29:11.681 12:18:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:29:11.681 EAL: No free 2048 kB hugepages reported on node 1
00:29:14.314 12:18:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2384067
00:29:14.314 12:18:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:29:14.314 Read completed with error (sct=0, sc=8)
00:29:14.314 starting I/O failed
00:29:14.314 Read completed with error
(sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Write completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Write completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Write completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Write completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Write completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 
00:29:14.314 starting I/O failed 00:29:14.314 Write completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Write completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 [2024-06-10 12:18:03.147648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Write completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Write completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Write completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Write completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Write completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 
starting I/O failed 00:29:14.314 Write completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Write completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Write completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Write completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Write completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Write completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 [2024-06-10 12:18:03.147870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O 
failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Read completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.314 Write completed with error (sct=0, sc=8) 00:29:14.314 starting I/O failed 00:29:14.315 Write completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Read completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Read completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Write completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Read completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Write completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Write completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Read completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Read completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Read completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Read completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Write completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Read completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 
00:29:14.315 Read completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Read completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Write completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Read completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Read completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Write completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 [2024-06-10 12:18:03.148091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.315 Read completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Read completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Write completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Write completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Write completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Read completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Read completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Read completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Write completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Read completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Read completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Write completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Read completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Write completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Write completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 
Read completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Write completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Read completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Write completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Write completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Read completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Write completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Read completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Write completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Write completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Write completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Write completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Read completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Read completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Write completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Write completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 Read completed with error (sct=0, sc=8) 00:29:14.315 starting I/O failed 00:29:14.315 [2024-06-10 12:18:03.148305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.315 [2024-06-10 12:18:03.148575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.315 [2024-06-10 12:18:03.148594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.315 qpair failed and we were unable to recover it. 
00:29:14.315 [2024-06-10 12:18:03.148847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.315 [2024-06-10 12:18:03.148890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.315 qpair failed and we were unable to recover it. 00:29:14.315 [2024-06-10 12:18:03.149070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.315 [2024-06-10 12:18:03.149109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.315 qpair failed and we were unable to recover it. 00:29:14.315 [2024-06-10 12:18:03.149434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.315 [2024-06-10 12:18:03.149474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.315 qpair failed and we were unable to recover it. 00:29:14.315 [2024-06-10 12:18:03.149659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.315 [2024-06-10 12:18:03.149701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.315 qpair failed and we were unable to recover it. 00:29:14.315 [2024-06-10 12:18:03.149950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.315 [2024-06-10 12:18:03.149991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.315 qpair failed and we were unable to recover it. 
00:29:14.315 [2024-06-10 12:18:03.150229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.315 [2024-06-10 12:18:03.150268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.315 qpair failed and we were unable to recover it. 00:29:14.315 [2024-06-10 12:18:03.150577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.315 [2024-06-10 12:18:03.150618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.315 qpair failed and we were unable to recover it. 00:29:14.315 [2024-06-10 12:18:03.150802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.315 [2024-06-10 12:18:03.150842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.315 qpair failed and we were unable to recover it. 00:29:14.315 [2024-06-10 12:18:03.151077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.315 [2024-06-10 12:18:03.151117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.315 qpair failed and we were unable to recover it. 00:29:14.315 [2024-06-10 12:18:03.151394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.315 [2024-06-10 12:18:03.151434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.316 qpair failed and we were unable to recover it. 
00:29:14.316 [2024-06-10 12:18:03.151781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.316 [2024-06-10 12:18:03.151838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.316 qpair failed and we were unable to recover it. 00:29:14.316 [2024-06-10 12:18:03.152027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.316 [2024-06-10 12:18:03.152071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.316 qpair failed and we were unable to recover it. 00:29:14.316 [2024-06-10 12:18:03.152318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.316 [2024-06-10 12:18:03.152359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.316 qpair failed and we were unable to recover it. 00:29:14.316 [2024-06-10 12:18:03.152604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.316 [2024-06-10 12:18:03.152647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.316 qpair failed and we were unable to recover it. 00:29:14.316 [2024-06-10 12:18:03.152951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.316 [2024-06-10 12:18:03.152993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.316 qpair failed and we were unable to recover it. 
00:29:14.316 [2024-06-10 12:18:03.153254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.316 [2024-06-10 12:18:03.153295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.316 qpair failed and we were unable to recover it. 00:29:14.316 [2024-06-10 12:18:03.153536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.316 [2024-06-10 12:18:03.153577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.316 qpair failed and we were unable to recover it. 00:29:14.316 [2024-06-10 12:18:03.153864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.316 [2024-06-10 12:18:03.153913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.316 qpair failed and we were unable to recover it. 00:29:14.316 [2024-06-10 12:18:03.154097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.316 [2024-06-10 12:18:03.154137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.316 qpair failed and we were unable to recover it. 00:29:14.316 [2024-06-10 12:18:03.154399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.316 [2024-06-10 12:18:03.154440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.316 qpair failed and we were unable to recover it. 
00:29:14.316 [2024-06-10 12:18:03.154682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.316 [2024-06-10 12:18:03.154723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.316 qpair failed and we were unable to recover it. 00:29:14.316 [2024-06-10 12:18:03.155061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.316 [2024-06-10 12:18:03.155101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.316 qpair failed and we were unable to recover it. 00:29:14.316 [2024-06-10 12:18:03.155359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.316 [2024-06-10 12:18:03.155400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.316 qpair failed and we were unable to recover it. 00:29:14.316 [2024-06-10 12:18:03.155661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.316 [2024-06-10 12:18:03.155703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.316 qpair failed and we were unable to recover it. 00:29:14.316 [2024-06-10 12:18:03.155981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.316 [2024-06-10 12:18:03.156022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.316 qpair failed and we were unable to recover it. 
00:29:14.316 [2024-06-10 12:18:03.156338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.316 [2024-06-10 12:18:03.156352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.316 qpair failed and we were unable to recover it. 00:29:14.316 [2024-06-10 12:18:03.156463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.316 [2024-06-10 12:18:03.156480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.316 qpair failed and we were unable to recover it. 00:29:14.316 [2024-06-10 12:18:03.156601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.316 [2024-06-10 12:18:03.156615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.316 qpair failed and we were unable to recover it. 00:29:14.316 [2024-06-10 12:18:03.156855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.316 [2024-06-10 12:18:03.156869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.316 qpair failed and we were unable to recover it. 00:29:14.316 [2024-06-10 12:18:03.156956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.316 [2024-06-10 12:18:03.156968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.316 qpair failed and we were unable to recover it. 
00:29:14.316 [2024-06-10 12:18:03.157207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.316 [2024-06-10 12:18:03.157221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.316 qpair failed and we were unable to recover it. 00:29:14.316 [2024-06-10 12:18:03.157446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.316 [2024-06-10 12:18:03.157459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.316 qpair failed and we were unable to recover it. 00:29:14.316 [2024-06-10 12:18:03.157682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.316 [2024-06-10 12:18:03.157696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.316 qpair failed and we were unable to recover it. 00:29:14.316 [2024-06-10 12:18:03.157871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.316 [2024-06-10 12:18:03.157885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.316 qpair failed and we were unable to recover it. 00:29:14.316 [2024-06-10 12:18:03.158123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.316 [2024-06-10 12:18:03.158137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.316 qpair failed and we were unable to recover it. 
00:29:14.316 [2024-06-10 12:18:03.158355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.316 [2024-06-10 12:18:03.158369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.316 qpair failed and we were unable to recover it. 00:29:14.316 [2024-06-10 12:18:03.158540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.317 [2024-06-10 12:18:03.158554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.317 qpair failed and we were unable to recover it. 00:29:14.317 [2024-06-10 12:18:03.158739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.317 [2024-06-10 12:18:03.158779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.317 qpair failed and we were unable to recover it. 00:29:14.317 [2024-06-10 12:18:03.158921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.317 [2024-06-10 12:18:03.158962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.317 qpair failed and we were unable to recover it. 00:29:14.317 [2024-06-10 12:18:03.159194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.317 [2024-06-10 12:18:03.159235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.317 qpair failed and we were unable to recover it. 
00:29:14.317 [2024-06-10 12:18:03.159558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.317 [2024-06-10 12:18:03.159599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.317 qpair failed and we were unable to recover it. 00:29:14.317 [2024-06-10 12:18:03.159845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.317 [2024-06-10 12:18:03.159886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.317 qpair failed and we were unable to recover it. 00:29:14.317 [2024-06-10 12:18:03.160218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.317 [2024-06-10 12:18:03.160259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.317 qpair failed and we were unable to recover it. 00:29:14.317 [2024-06-10 12:18:03.160578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.317 [2024-06-10 12:18:03.160619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.317 qpair failed and we were unable to recover it. 00:29:14.317 [2024-06-10 12:18:03.160902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.317 [2024-06-10 12:18:03.160943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.317 qpair failed and we were unable to recover it. 
00:29:14.321 [2024-06-10 12:18:03.191738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.191779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.321 [2024-06-10 12:18:03.191991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.192031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.321 [2024-06-10 12:18:03.192264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.192277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.321 [2024-06-10 12:18:03.192431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.192472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.321 [2024-06-10 12:18:03.192663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.192705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 
00:29:14.321 [2024-06-10 12:18:03.192938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.192979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.321 [2024-06-10 12:18:03.193305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.193346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.321 [2024-06-10 12:18:03.193621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.193663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.321 [2024-06-10 12:18:03.193895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.193937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.321 [2024-06-10 12:18:03.194133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.194174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 
00:29:14.321 [2024-06-10 12:18:03.194383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.194424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.321 [2024-06-10 12:18:03.194666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.194708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.321 [2024-06-10 12:18:03.194939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.194981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.321 [2024-06-10 12:18:03.195270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.195311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.321 [2024-06-10 12:18:03.195623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.195664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 
00:29:14.321 [2024-06-10 12:18:03.195906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.195947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.321 [2024-06-10 12:18:03.196160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.196201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.321 [2024-06-10 12:18:03.196447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.196496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.321 [2024-06-10 12:18:03.196670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.196711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.321 [2024-06-10 12:18:03.196920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.196961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 
00:29:14.321 [2024-06-10 12:18:03.197246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.197260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.321 [2024-06-10 12:18:03.197420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.197433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.321 [2024-06-10 12:18:03.197593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.197608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.321 [2024-06-10 12:18:03.197793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.197834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.321 [2024-06-10 12:18:03.198091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.198132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 
00:29:14.321 [2024-06-10 12:18:03.198363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.198404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.321 [2024-06-10 12:18:03.198717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.198759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.321 [2024-06-10 12:18:03.199064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.199105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.321 [2024-06-10 12:18:03.199404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.199444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.321 [2024-06-10 12:18:03.199642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.199684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 
00:29:14.321 [2024-06-10 12:18:03.199892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.199933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.321 [2024-06-10 12:18:03.200238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.200278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.321 [2024-06-10 12:18:03.200577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.200619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.321 [2024-06-10 12:18:03.200908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.200949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.321 [2024-06-10 12:18:03.201229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.201270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 
00:29:14.321 [2024-06-10 12:18:03.201567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.201609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.321 [2024-06-10 12:18:03.201916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.201957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.321 [2024-06-10 12:18:03.202220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.202261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.321 [2024-06-10 12:18:03.202558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.202600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.321 [2024-06-10 12:18:03.202894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.202934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 
00:29:14.321 [2024-06-10 12:18:03.203081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.203122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.321 [2024-06-10 12:18:03.203321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.203335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.321 [2024-06-10 12:18:03.203576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.321 [2024-06-10 12:18:03.203618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.321 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.203830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.203871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.204165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.204178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 
00:29:14.322 [2024-06-10 12:18:03.204417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.204446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.204669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.204711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.204994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.205040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.205198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.205212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.205383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.205424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 
00:29:14.322 [2024-06-10 12:18:03.205737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.205779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.205953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.205995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.206270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.206312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.206523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.206565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.206814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.206855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 
00:29:14.322 [2024-06-10 12:18:03.207140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.207181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.207369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.207382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.207557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.207570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.207741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.207782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.208002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.208044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 
00:29:14.322 [2024-06-10 12:18:03.208349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.208390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.208656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.208697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.208859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.208907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.209210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.209251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.209574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.209615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 
00:29:14.322 [2024-06-10 12:18:03.209913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.209954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.210196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.210237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.210546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.210559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.210721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.210734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.210980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.211021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 
00:29:14.322 [2024-06-10 12:18:03.211322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.211363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.211576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.211589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.211755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.211797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.211958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.211999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.212318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.212359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 
00:29:14.322 [2024-06-10 12:18:03.212664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.212706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.212957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.212999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.213312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.213353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.213521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.213562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.213863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.213904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 
00:29:14.322 [2024-06-10 12:18:03.214195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.214236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.214499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.214540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.214835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.214876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.215165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.215206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.215434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.215475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 
00:29:14.322 [2024-06-10 12:18:03.215802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.215843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.216092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.216133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.216437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.216488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.322 [2024-06-10 12:18:03.216775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.322 [2024-06-10 12:18:03.216816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.322 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.216987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.217029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 
00:29:14.323 [2024-06-10 12:18:03.217236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.217249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.217472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.217522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.217692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.217733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.217952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.217993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.218212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.218253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 
00:29:14.323 [2024-06-10 12:18:03.218565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.218607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.218817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.218858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.219109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.219150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.219413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.219427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.219574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.219588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 
00:29:14.323 [2024-06-10 12:18:03.219859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.219900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.220129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.220170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.220345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.220391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.220670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.220712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.221008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.221049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 
00:29:14.323 [2024-06-10 12:18:03.221326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.221367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.221642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.221656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.221813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.221826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.222047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.222088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.222390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.222430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 
00:29:14.323 [2024-06-10 12:18:03.222708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.222749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.222987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.223028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.223270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.223311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.223635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.223677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.223928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.223969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 
00:29:14.323 [2024-06-10 12:18:03.224219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.224259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.224567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.224609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.224897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.224938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.225236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.225276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.225573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.225615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 
00:29:14.323 [2024-06-10 12:18:03.225933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.225974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.226275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.226316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.226607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.226648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.226953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.226993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.227263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.227304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 
00:29:14.323 [2024-06-10 12:18:03.227604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.227617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.227829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.227842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.228002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.228015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.228181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.228222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.228521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.228602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 
00:29:14.323 [2024-06-10 12:18:03.228963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.229019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.229218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.229255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.229377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.229392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.229611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.229624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 00:29:14.323 [2024-06-10 12:18:03.229838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.229851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.323 qpair failed and we were unable to recover it. 
00:29:14.323 [2024-06-10 12:18:03.230016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.323 [2024-06-10 12:18:03.230029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.324 qpair failed and we were unable to recover it. 00:29:14.324 [2024-06-10 12:18:03.230255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.324 [2024-06-10 12:18:03.230268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.324 qpair failed and we were unable to recover it. 00:29:14.324 [2024-06-10 12:18:03.230370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.324 [2024-06-10 12:18:03.230382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.324 qpair failed and we were unable to recover it. 00:29:14.324 [2024-06-10 12:18:03.230557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.324 [2024-06-10 12:18:03.230598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.324 qpair failed and we were unable to recover it. 00:29:14.324 [2024-06-10 12:18:03.230775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.324 [2024-06-10 12:18:03.230816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.324 qpair failed and we were unable to recover it. 
00:29:14.324 [2024-06-10 12:18:03.231087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.324 [2024-06-10 12:18:03.231128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.324 qpair failed and we were unable to recover it. 00:29:14.324 [2024-06-10 12:18:03.231446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.326 [2024-06-10 12:18:03.231496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.326 qpair failed and we were unable to recover it. 00:29:14.326 [2024-06-10 12:18:03.231715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.326 [2024-06-10 12:18:03.231762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.326 qpair failed and we were unable to recover it. 00:29:14.326 [2024-06-10 12:18:03.231997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.326 [2024-06-10 12:18:03.232038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.326 qpair failed and we were unable to recover it. 00:29:14.326 [2024-06-10 12:18:03.232266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.326 [2024-06-10 12:18:03.232307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.326 qpair failed and we were unable to recover it. 
00:29:14.326 [2024-06-10 12:18:03.232455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.326 [2024-06-10 12:18:03.232468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.326 qpair failed and we were unable to recover it. 00:29:14.326 [2024-06-10 12:18:03.232726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.326 [2024-06-10 12:18:03.232768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.326 qpair failed and we were unable to recover it. 00:29:14.326 [2024-06-10 12:18:03.233046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.326 [2024-06-10 12:18:03.233087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.326 qpair failed and we were unable to recover it. 00:29:14.326 [2024-06-10 12:18:03.233398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.326 [2024-06-10 12:18:03.233439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.326 qpair failed and we were unable to recover it. 00:29:14.326 [2024-06-10 12:18:03.233752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.326 [2024-06-10 12:18:03.233793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.326 qpair failed and we were unable to recover it. 
00:29:14.326 [2024-06-10 12:18:03.233977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.326 [2024-06-10 12:18:03.234017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.326 qpair failed and we were unable to recover it. 00:29:14.326 [2024-06-10 12:18:03.234259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.326 [2024-06-10 12:18:03.234272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.326 qpair failed and we were unable to recover it. 00:29:14.326 [2024-06-10 12:18:03.234372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.326 [2024-06-10 12:18:03.234413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.326 qpair failed and we were unable to recover it. 00:29:14.326 [2024-06-10 12:18:03.234703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.326 [2024-06-10 12:18:03.234745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.326 qpair failed and we were unable to recover it. 00:29:14.326 [2024-06-10 12:18:03.234980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.326 [2024-06-10 12:18:03.235021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.326 qpair failed and we were unable to recover it. 
00:29:14.326 [2024-06-10 12:18:03.235330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.326 [2024-06-10 12:18:03.235371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.326 qpair failed and we were unable to recover it. 00:29:14.326 [2024-06-10 12:18:03.235652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.326 [2024-06-10 12:18:03.235665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.326 qpair failed and we were unable to recover it. 00:29:14.326 [2024-06-10 12:18:03.235888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.326 [2024-06-10 12:18:03.235901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.326 qpair failed and we were unable to recover it. 00:29:14.326 [2024-06-10 12:18:03.236116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.326 [2024-06-10 12:18:03.236129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.326 qpair failed and we were unable to recover it. 00:29:14.326 [2024-06-10 12:18:03.236288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.326 [2024-06-10 12:18:03.236302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.326 qpair failed and we were unable to recover it. 
00:29:14.326 [2024-06-10 12:18:03.236496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.326 [2024-06-10 12:18:03.236538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.326 qpair failed and we were unable to recover it. 00:29:14.326 [2024-06-10 12:18:03.236835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.326 [2024-06-10 12:18:03.236875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.326 qpair failed and we were unable to recover it. 00:29:14.326 [2024-06-10 12:18:03.237114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.326 [2024-06-10 12:18:03.237127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.326 qpair failed and we were unable to recover it. 00:29:14.326 [2024-06-10 12:18:03.237348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.327 [2024-06-10 12:18:03.237390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.327 qpair failed and we were unable to recover it. 00:29:14.327 [2024-06-10 12:18:03.237611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.327 [2024-06-10 12:18:03.237653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.327 qpair failed and we were unable to recover it. 
00:29:14.327 [2024-06-10 12:18:03.237977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.327 [2024-06-10 12:18:03.238018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.327 qpair failed and we were unable to recover it. 00:29:14.327 [2024-06-10 12:18:03.238340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.327 [2024-06-10 12:18:03.238381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.327 qpair failed and we were unable to recover it. 00:29:14.327 [2024-06-10 12:18:03.238663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.327 [2024-06-10 12:18:03.238705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.327 qpair failed and we were unable to recover it. 00:29:14.327 [2024-06-10 12:18:03.239022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.327 [2024-06-10 12:18:03.239062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.327 qpair failed and we were unable to recover it. 00:29:14.327 [2024-06-10 12:18:03.239302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.327 [2024-06-10 12:18:03.239343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.327 qpair failed and we were unable to recover it. 
00:29:14.327 [2024-06-10 12:18:03.239634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.327 [2024-06-10 12:18:03.239676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.327 qpair failed and we were unable to recover it.
00:29:14.328 [identical error group repeats: the posix_sock_create connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error for tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 recurs continuously from 12:18:03.239 through 12:18:03.271, each attempt ending with "qpair failed and we were unable to recover it."]
00:29:14.328 [2024-06-10 12:18:03.271532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.328 [2024-06-10 12:18:03.271574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.328 qpair failed and we were unable to recover it. 00:29:14.328 [2024-06-10 12:18:03.271807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.328 [2024-06-10 12:18:03.271848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.328 qpair failed and we were unable to recover it. 00:29:14.328 [2024-06-10 12:18:03.272126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.328 [2024-06-10 12:18:03.272167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.328 qpair failed and we were unable to recover it. 00:29:14.328 [2024-06-10 12:18:03.272442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.272456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.272614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.272627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 
00:29:14.329 [2024-06-10 12:18:03.272850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.272891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.273220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.273261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.273486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.273500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.273695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.273709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.273869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.273883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 
00:29:14.329 [2024-06-10 12:18:03.274131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.274172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.274403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.274444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.274732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.274745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.274997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.275039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.275316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.275357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 
00:29:14.329 [2024-06-10 12:18:03.275628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.275670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.275952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.275965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.276160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.276173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.276338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.276351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.276465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.276483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 
00:29:14.329 [2024-06-10 12:18:03.276688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.276728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.276953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.276994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.277314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.277355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.277593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.277635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.277913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.277954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 
00:29:14.329 [2024-06-10 12:18:03.278256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.278298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.278577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.278618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.278891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.278932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.279215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.279264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.279532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.279567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 
00:29:14.329 [2024-06-10 12:18:03.279847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.279888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.280127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.280168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.280515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.280558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.280813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.280855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.281131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.281185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 
00:29:14.329 [2024-06-10 12:18:03.281442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.281455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.281621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.281635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.281871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.281884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.282098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.282111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.282343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.282384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 
00:29:14.329 [2024-06-10 12:18:03.282601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.282642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.282857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.282898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.283152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.283193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.283520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.283534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.283722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.283772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 
00:29:14.329 [2024-06-10 12:18:03.283923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.283963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.284272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.284314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.284608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.284621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.284865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.284900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.285140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.285181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 
00:29:14.329 [2024-06-10 12:18:03.285328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.285369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.285640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.285653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.285812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.285853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.286131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.286172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.286384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.286398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 
00:29:14.329 [2024-06-10 12:18:03.286550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.286562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.286803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.286844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.287067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.287108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.287355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.287396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.287740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.287782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 
00:29:14.329 [2024-06-10 12:18:03.288081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.288121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.288335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.288376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.288592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.288606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.288756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.288769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.288983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.289035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 
00:29:14.329 [2024-06-10 12:18:03.289355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.289395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.289718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.329 [2024-06-10 12:18:03.289760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.329 qpair failed and we were unable to recover it. 00:29:14.329 [2024-06-10 12:18:03.290060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.330 [2024-06-10 12:18:03.290101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.330 qpair failed and we were unable to recover it. 00:29:14.330 [2024-06-10 12:18:03.290386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.330 [2024-06-10 12:18:03.290399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.330 qpair failed and we were unable to recover it. 00:29:14.330 [2024-06-10 12:18:03.290512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.330 [2024-06-10 12:18:03.290525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.330 qpair failed and we were unable to recover it. 
00:29:14.330 [2024-06-10 12:18:03.290776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.330 [2024-06-10 12:18:03.290817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.330 qpair failed and we were unable to recover it. 00:29:14.330 [2024-06-10 12:18:03.291034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.330 [2024-06-10 12:18:03.291075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.330 qpair failed and we were unable to recover it. 00:29:14.330 [2024-06-10 12:18:03.291412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.330 [2024-06-10 12:18:03.291425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.330 qpair failed and we were unable to recover it. 00:29:14.330 [2024-06-10 12:18:03.291617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.330 [2024-06-10 12:18:03.291632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.330 qpair failed and we were unable to recover it. 00:29:14.330 [2024-06-10 12:18:03.291802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.330 [2024-06-10 12:18:03.291816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.330 qpair failed and we were unable to recover it. 
00:29:14.330 [2024-06-10 12:18:03.292007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.330 [2024-06-10 12:18:03.292048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.330 qpair failed and we were unable to recover it.
[... identical three-message sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 12:18:03.292300 through 12:18:03.324575 ...]
00:29:14.331 [2024-06-10 12:18:03.324729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.331 [2024-06-10 12:18:03.324743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.331 qpair failed and we were unable to recover it. 00:29:14.331 [2024-06-10 12:18:03.324940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.331 [2024-06-10 12:18:03.324981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.331 qpair failed and we were unable to recover it. 00:29:14.331 [2024-06-10 12:18:03.325289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.331 [2024-06-10 12:18:03.325330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.331 qpair failed and we were unable to recover it. 00:29:14.331 [2024-06-10 12:18:03.325636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.331 [2024-06-10 12:18:03.325678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.325899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.325940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 
00:29:14.332 [2024-06-10 12:18:03.326120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.326161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.326379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.326420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.326641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.326655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.326829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.326871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.327039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.327080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 
00:29:14.332 [2024-06-10 12:18:03.327277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.327318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.327607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.327643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.327875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.327917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.328257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.328298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.328585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.328632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 
00:29:14.332 [2024-06-10 12:18:03.328792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.328833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.329129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.329171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.329405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.329447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.329699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.329741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.330069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.330112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 
00:29:14.332 [2024-06-10 12:18:03.330419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.330461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.330652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.330667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.330916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.330931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.331104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.331118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.331300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.331341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 
00:29:14.332 [2024-06-10 12:18:03.331633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.331648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.331869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.331882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.332083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.332097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.332320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.332334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.332555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.332570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 
00:29:14.332 [2024-06-10 12:18:03.332683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.332696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.332872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.332914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.333205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.333247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.333597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.333639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.333866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.333907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 
00:29:14.332 [2024-06-10 12:18:03.334219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.334260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.334560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.334602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.334854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.334895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.335119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.335162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.335489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.335532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 
00:29:14.332 [2024-06-10 12:18:03.335750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.335798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.336053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.336090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.336376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.336423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.336685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.336700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.336925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.336939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 
00:29:14.332 [2024-06-10 12:18:03.337161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.337176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.337423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.337458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.337710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.337752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.338004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.338057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.338358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.338401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 
00:29:14.332 [2024-06-10 12:18:03.338650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.338664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.338903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.338918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.339042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.339083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.339305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.339347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.339655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.339704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 
00:29:14.332 [2024-06-10 12:18:03.339963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.340007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.340301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.340344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.340657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.340701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.340947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.340990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.341306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.341347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 
00:29:14.332 [2024-06-10 12:18:03.341653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.341667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.341886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.341928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.342120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.342162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.342452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.342504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.342797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.342840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 
00:29:14.332 [2024-06-10 12:18:03.343083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.332 [2024-06-10 12:18:03.343126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.332 qpair failed and we were unable to recover it. 00:29:14.332 [2024-06-10 12:18:03.343357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.343398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.343629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.343644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.343884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.343898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.344073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.344114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 
00:29:14.333 [2024-06-10 12:18:03.344425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.344467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.344646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.344662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.344863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.344877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.345068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.345109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.345396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.345437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 
00:29:14.333 [2024-06-10 12:18:03.345733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.345748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.345974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.345988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.346168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.346182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.346406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.346421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.346653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.346667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 
00:29:14.333 [2024-06-10 12:18:03.346840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.346855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.347076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.347117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.347340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.347382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.347613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.347628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.347753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.347768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 
00:29:14.333 [2024-06-10 12:18:03.347899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.347913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.348202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.348244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.348568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.348611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.348848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.348890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.349181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.349223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 
00:29:14.333 [2024-06-10 12:18:03.349471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.349520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.349605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.349618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.349728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.349741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.349843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.349856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.350031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.350077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 
00:29:14.333 [2024-06-10 12:18:03.350327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.350369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.350661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.350704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.350921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.350963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.351148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.351189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.351488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.351530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 
00:29:14.333 [2024-06-10 12:18:03.351832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.351846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.352008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.352022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.352217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.352258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.352552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.352594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.352881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.352895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 
00:29:14.333 [2024-06-10 12:18:03.353139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.353153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.353269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.353284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.353536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.353578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.353875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.353917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.354204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.354246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 
00:29:14.333 [2024-06-10 12:18:03.354561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.354601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.354845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.354859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.355059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.355073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.355300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.355314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.355444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.355494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 
00:29:14.333 [2024-06-10 12:18:03.355741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.355782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.356044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.356086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.356372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.356413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.356769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.356811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.356990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.357032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 
00:29:14.333 [2024-06-10 12:18:03.357348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.357390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.357638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.357680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.357980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.357994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.358237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.358251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.358499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.358536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 
00:29:14.333 [2024-06-10 12:18:03.358779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.358820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.359007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.359048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.359332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.359374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.359657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.359693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.359884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.359924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 
00:29:14.333 [2024-06-10 12:18:03.360163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.360204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.360422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.360463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.360760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.360802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.361119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.361159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.361473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.361532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 
00:29:14.333 [2024-06-10 12:18:03.361806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.361848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.362016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.333 [2024-06-10 12:18:03.362058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.333 qpair failed and we were unable to recover it. 00:29:14.333 [2024-06-10 12:18:03.362372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.362414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 00:29:14.334 [2024-06-10 12:18:03.362732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.362746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 00:29:14.334 [2024-06-10 12:18:03.362852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.362865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 
00:29:14.334 [2024-06-10 12:18:03.363126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.363140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 00:29:14.334 [2024-06-10 12:18:03.363321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.363335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 00:29:14.334 [2024-06-10 12:18:03.363583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.363597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 00:29:14.334 [2024-06-10 12:18:03.363725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.363739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 00:29:14.334 [2024-06-10 12:18:03.363959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.363973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 
00:29:14.334 [2024-06-10 12:18:03.364172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.364186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 00:29:14.334 [2024-06-10 12:18:03.364351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.364366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 00:29:14.334 [2024-06-10 12:18:03.364558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.364572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 00:29:14.334 [2024-06-10 12:18:03.364683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.364696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 00:29:14.334 [2024-06-10 12:18:03.364877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.364891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 
00:29:14.334 [2024-06-10 12:18:03.365129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.365143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 00:29:14.334 [2024-06-10 12:18:03.365332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.365346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 00:29:14.334 [2024-06-10 12:18:03.365577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.365592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 00:29:14.334 [2024-06-10 12:18:03.365777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.365791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 00:29:14.334 [2024-06-10 12:18:03.366029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.366043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 
00:29:14.334 [2024-06-10 12:18:03.366147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.366171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 00:29:14.334 [2024-06-10 12:18:03.366414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.366428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 00:29:14.334 [2024-06-10 12:18:03.366622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.366636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 00:29:14.334 [2024-06-10 12:18:03.366824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.366837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 00:29:14.334 [2024-06-10 12:18:03.367086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.367111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 
00:29:14.334 [2024-06-10 12:18:03.367369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.367383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 00:29:14.334 [2024-06-10 12:18:03.367632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.367647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 00:29:14.334 [2024-06-10 12:18:03.367818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.367833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 00:29:14.334 [2024-06-10 12:18:03.368057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.368071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 00:29:14.334 [2024-06-10 12:18:03.368250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.368265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 
00:29:14.334 [2024-06-10 12:18:03.368533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.368548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 00:29:14.334 [2024-06-10 12:18:03.368720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.368734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 00:29:14.334 [2024-06-10 12:18:03.369000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.369014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 00:29:14.334 [2024-06-10 12:18:03.369148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.369163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 00:29:14.334 [2024-06-10 12:18:03.369396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.369410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 
00:29:14.334 [2024-06-10 12:18:03.369588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.369602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 00:29:14.334 [2024-06-10 12:18:03.369764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.369778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 00:29:14.334 [2024-06-10 12:18:03.370044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.370059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 00:29:14.334 [2024-06-10 12:18:03.370281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.370295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 00:29:14.334 [2024-06-10 12:18:03.370544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.370567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 
00:29:14.334 [2024-06-10 12:18:03.370762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.370777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 00:29:14.334 [2024-06-10 12:18:03.371017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.371033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 00:29:14.334 [2024-06-10 12:18:03.371209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.371223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 00:29:14.334 [2024-06-10 12:18:03.371391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.371405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 00:29:14.334 [2024-06-10 12:18:03.371659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.334 [2024-06-10 12:18:03.371702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.334 qpair failed and we were unable to recover it. 
00:29:14.334 [2024-06-10 12:18:03.371919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.334 [2024-06-10 12:18:03.371961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.334 qpair failed and we were unable to recover it.
00:29:14.334 [2024-06-10 12:18:03.372228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.334 [2024-06-10 12:18:03.372269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.334 qpair failed and we were unable to recover it.
00:29:14.334 [2024-06-10 12:18:03.372582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.334 [2024-06-10 12:18:03.372624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.334 qpair failed and we were unable to recover it.
00:29:14.334 [2024-06-10 12:18:03.372939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.334 [2024-06-10 12:18:03.372953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.334 qpair failed and we were unable to recover it.
00:29:14.334 [2024-06-10 12:18:03.373126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.334 [2024-06-10 12:18:03.373140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.334 qpair failed and we were unable to recover it.
00:29:14.334 [2024-06-10 12:18:03.373378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.334 [2024-06-10 12:18:03.373418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.334 qpair failed and we were unable to recover it.
00:29:14.334 [2024-06-10 12:18:03.373766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.334 [2024-06-10 12:18:03.373801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.334 qpair failed and we were unable to recover it.
00:29:14.334 [2024-06-10 12:18:03.374063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.334 [2024-06-10 12:18:03.374099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.334 qpair failed and we were unable to recover it.
00:29:14.334 [2024-06-10 12:18:03.374342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.334 [2024-06-10 12:18:03.374383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.334 qpair failed and we were unable to recover it.
00:29:14.334 [2024-06-10 12:18:03.374699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.334 [2024-06-10 12:18:03.374729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.334 qpair failed and we were unable to recover it.
00:29:14.334 [2024-06-10 12:18:03.374907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.334 [2024-06-10 12:18:03.374949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.334 qpair failed and we were unable to recover it.
00:29:14.334 [2024-06-10 12:18:03.375168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.334 [2024-06-10 12:18:03.375209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.334 qpair failed and we were unable to recover it.
00:29:14.334 [2024-06-10 12:18:03.375420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.334 [2024-06-10 12:18:03.375435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.334 qpair failed and we were unable to recover it.
00:29:14.334 [2024-06-10 12:18:03.375608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.334 [2024-06-10 12:18:03.375651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.334 qpair failed and we were unable to recover it.
00:29:14.334 [2024-06-10 12:18:03.375941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.334 [2024-06-10 12:18:03.375982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.334 qpair failed and we were unable to recover it.
00:29:14.334 [2024-06-10 12:18:03.376240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.334 [2024-06-10 12:18:03.376282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.334 qpair failed and we were unable to recover it.
00:29:14.334 [2024-06-10 12:18:03.376529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.334 [2024-06-10 12:18:03.376571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.334 qpair failed and we were unable to recover it.
00:29:14.334 [2024-06-10 12:18:03.376828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.334 [2024-06-10 12:18:03.376842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.334 qpair failed and we were unable to recover it.
00:29:14.334 [2024-06-10 12:18:03.377068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.334 [2024-06-10 12:18:03.377082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.334 qpair failed and we were unable to recover it.
00:29:14.334 [2024-06-10 12:18:03.377335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.334 [2024-06-10 12:18:03.377385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.334 qpair failed and we were unable to recover it.
00:29:14.334 [2024-06-10 12:18:03.377642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.377685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.377919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.377961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.378182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.378223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.378442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.378456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.378650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.378692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.379027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.379068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.379355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.379396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.379607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.379622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.379810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.379852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.380144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.380186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.380524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.380568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.380904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.380947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.381094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.381135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.381434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.381487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.381763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.381810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.382008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.382050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.382274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.382315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.382619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.382664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.382847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.382889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.383133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.383175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.383499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.383541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.383689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.383731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.384043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.384084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.384378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.384419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.384717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.384732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.384889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.384903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.385114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.385128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.385374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.385388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.385622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.385636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.385791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.385805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.385973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.386014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.386256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.386297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.386626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.386669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.386817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.386858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.387078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.387119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.387421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.387458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.387718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.387732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.387902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.387916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.388187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.388233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.388543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.388586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.388830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.388871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.389102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.389142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.389471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.389523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.389862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.389904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.390166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.390207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.390511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.390553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.390864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.390906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.391154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.391194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.391505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.391547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.391802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.391843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.392099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.392140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.392402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.392443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.392686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.392700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.392947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.392984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.393288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.393335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.393642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.393686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.393854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.393896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.394117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.394159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.394463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.394517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.394758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.394800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.395135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.395177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.395421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.395463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.395780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.395822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.396100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.396142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.396368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.396409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.396645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.396687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.396954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.396968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.397139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.397153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.397423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.397465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.397770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.397811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.397940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.397953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.398212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.398253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.398542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.335 [2024-06-10 12:18:03.398585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.335 qpair failed and we were unable to recover it.
00:29:14.335 [2024-06-10 12:18:03.398894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.336 [2024-06-10 12:18:03.398909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.336 qpair failed and we were unable to recover it.
00:29:14.336 [2024-06-10 12:18:03.399182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.336 [2024-06-10 12:18:03.399196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.336 qpair failed and we were unable to recover it.
00:29:14.336 [2024-06-10 12:18:03.399443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.399457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.399637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.399652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.399831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.399846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.400087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.400102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.400282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.400297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 
00:29:14.336 [2024-06-10 12:18:03.400499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.400514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.400691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.400706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.400871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.400886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.401150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.401164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.401355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.401370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 
00:29:14.336 [2024-06-10 12:18:03.401456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.401468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.401703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.401718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.401988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.402002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.402225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.402240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.402491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.402506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 
00:29:14.336 [2024-06-10 12:18:03.402659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.402674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.402826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.402840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.403010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.403024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.403180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.403194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.403436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.403452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 
00:29:14.336 [2024-06-10 12:18:03.403559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.403572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.403751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.403765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.403988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.404003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.404229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.404244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.404533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.404547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 
00:29:14.336 [2024-06-10 12:18:03.404670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.404685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.404907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.404921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.405018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.405030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.405220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.405235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.405415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.405430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 
00:29:14.336 [2024-06-10 12:18:03.405622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.405637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.405886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.405900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.406092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.406106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.406373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.406388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.406562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.406577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 
00:29:14.336 [2024-06-10 12:18:03.406878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.406892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.407159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.407174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.407351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.407365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.407654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.407668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.407892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.407906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 
00:29:14.336 [2024-06-10 12:18:03.408149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.408164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.408387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.408401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.408583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.408597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.408772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.408786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.409034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.409048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 
00:29:14.336 [2024-06-10 12:18:03.409237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.409251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.409485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.409499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.409727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.409741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.410017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.410031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.410285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.410300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 
00:29:14.336 [2024-06-10 12:18:03.410499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.410514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.410796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.410811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.410974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.410988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.411231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.411245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.411436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.411451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 
00:29:14.336 [2024-06-10 12:18:03.411700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.411715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.411890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.411904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.412078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.412092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.412341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.412355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.412599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.412613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 
00:29:14.336 [2024-06-10 12:18:03.412785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.412799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.413022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.413037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.413248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.413262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.413425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.413439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.336 [2024-06-10 12:18:03.413664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.413678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 
00:29:14.336 [2024-06-10 12:18:03.413848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.336 [2024-06-10 12:18:03.413862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.336 qpair failed and we were unable to recover it. 00:29:14.337 [2024-06-10 12:18:03.414083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.337 [2024-06-10 12:18:03.414097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.337 qpair failed and we were unable to recover it. 00:29:14.337 [2024-06-10 12:18:03.414344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.337 [2024-06-10 12:18:03.414358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.337 qpair failed and we were unable to recover it. 00:29:14.337 [2024-06-10 12:18:03.414613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.337 [2024-06-10 12:18:03.414628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.337 qpair failed and we were unable to recover it. 00:29:14.337 [2024-06-10 12:18:03.414865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.337 [2024-06-10 12:18:03.414879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.337 qpair failed and we were unable to recover it. 
00:29:14.337 [2024-06-10 12:18:03.415054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.337 [2024-06-10 12:18:03.415068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.337 qpair failed and we were unable to recover it. 00:29:14.337 [2024-06-10 12:18:03.415227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.337 [2024-06-10 12:18:03.415241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.337 qpair failed and we were unable to recover it. 00:29:14.337 [2024-06-10 12:18:03.415436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.337 [2024-06-10 12:18:03.415450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.337 qpair failed and we were unable to recover it. 00:29:14.337 [2024-06-10 12:18:03.415694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.337 [2024-06-10 12:18:03.415708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.337 qpair failed and we were unable to recover it. 00:29:14.337 [2024-06-10 12:18:03.415863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.337 [2024-06-10 12:18:03.415877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.337 qpair failed and we were unable to recover it. 
00:29:14.337 [2024-06-10 12:18:03.416105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.337 [2024-06-10 12:18:03.416119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.337 qpair failed and we were unable to recover it. 00:29:14.337 [2024-06-10 12:18:03.416279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.337 [2024-06-10 12:18:03.416294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.337 qpair failed and we were unable to recover it. 00:29:14.337 [2024-06-10 12:18:03.416466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.337 [2024-06-10 12:18:03.416485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.337 qpair failed and we were unable to recover it. 00:29:14.337 [2024-06-10 12:18:03.416642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.337 [2024-06-10 12:18:03.416656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.337 qpair failed and we were unable to recover it. 00:29:14.337 [2024-06-10 12:18:03.416748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.337 [2024-06-10 12:18:03.416760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.337 qpair failed and we were unable to recover it. 
00:29:14.337 [2024-06-10 12:18:03.416918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.337 [2024-06-10 12:18:03.416931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.337 qpair failed and we were unable to recover it. 00:29:14.337 [2024-06-10 12:18:03.417086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.337 [2024-06-10 12:18:03.417100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.337 qpair failed and we were unable to recover it. 00:29:14.337 [2024-06-10 12:18:03.417412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.337 [2024-06-10 12:18:03.417426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.337 qpair failed and we were unable to recover it. 00:29:14.337 [2024-06-10 12:18:03.417599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.337 [2024-06-10 12:18:03.417614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.337 qpair failed and we were unable to recover it. 00:29:14.337 [2024-06-10 12:18:03.417770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.337 [2024-06-10 12:18:03.417785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.337 qpair failed and we were unable to recover it. 
00:29:14.337 [2024-06-10 12:18:03.417940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.337 [2024-06-10 12:18:03.417954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.337 qpair failed and we were unable to recover it.
00:29:14.338 [... identical connect() failed / sock connection error / qpair failed messages for tqpair=0x7f5044000b90 (addr=10.0.0.2, port=4420, errno = 111) repeat continuously from 12:18:03.418107 through 12:18:03.441679; repeats elided ...]
00:29:14.338 [2024-06-10 12:18:03.441850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.338 [2024-06-10 12:18:03.441863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.338 qpair failed and we were unable to recover it. 00:29:14.338 [2024-06-10 12:18:03.442084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.338 [2024-06-10 12:18:03.442097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.338 qpair failed and we were unable to recover it. 00:29:14.338 [2024-06-10 12:18:03.442345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.338 [2024-06-10 12:18:03.442359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.338 qpair failed and we were unable to recover it. 00:29:14.338 [2024-06-10 12:18:03.442624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.338 [2024-06-10 12:18:03.442638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.338 qpair failed and we were unable to recover it. 00:29:14.338 [2024-06-10 12:18:03.442888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.338 [2024-06-10 12:18:03.442901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.338 qpair failed and we were unable to recover it. 
00:29:14.338 [2024-06-10 12:18:03.443147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.338 [2024-06-10 12:18:03.443161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.338 qpair failed and we were unable to recover it. 00:29:14.338 [2024-06-10 12:18:03.443311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.338 [2024-06-10 12:18:03.443327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.338 qpair failed and we were unable to recover it. 00:29:14.338 [2024-06-10 12:18:03.443513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.338 [2024-06-10 12:18:03.443527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.338 qpair failed and we were unable to recover it. 00:29:14.338 [2024-06-10 12:18:03.443765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.338 [2024-06-10 12:18:03.443778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.338 qpair failed and we were unable to recover it. 00:29:14.338 [2024-06-10 12:18:03.443870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.443882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 
00:29:14.339 [2024-06-10 12:18:03.444059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.444072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.444308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.444322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.444412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.444424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.444591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.444605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.444818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.444832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 
00:29:14.339 [2024-06-10 12:18:03.445049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.445063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.445280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.445294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.445457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.445471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.445647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.445661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.445881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.445894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 
00:29:14.339 [2024-06-10 12:18:03.446050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.446065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.446229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.446243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.446462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.446481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.446702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.446716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.446874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.446888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 
00:29:14.339 [2024-06-10 12:18:03.447094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.447107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.447343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.447357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.447456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.447469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.447668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.447682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.447930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.447944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 
00:29:14.339 [2024-06-10 12:18:03.448184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.448198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.448370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.448383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.448563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.448577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.448752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.448766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.448937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.448950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 
00:29:14.339 [2024-06-10 12:18:03.449107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.449120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.449300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.449314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.449397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.449410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.449580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.449594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.449856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.449869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 
00:29:14.339 [2024-06-10 12:18:03.450089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.450102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.450345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.450358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.450582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.450596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.450845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.450858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.451014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.451028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 
00:29:14.339 [2024-06-10 12:18:03.451245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.451258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.451483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.451499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.451764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.451777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.451861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.451873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.452058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.452072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 
00:29:14.339 [2024-06-10 12:18:03.452226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.452240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.452467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.452486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.452659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.452672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.452855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.452868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.453109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.453123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 
00:29:14.339 [2024-06-10 12:18:03.453307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.453321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.453561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.453575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.453743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.453757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.453854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.453866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.454126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.454139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 
00:29:14.339 [2024-06-10 12:18:03.454361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.454374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.454523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.454536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.454696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.454710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.454816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.454829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.454948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.454961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 
00:29:14.339 [2024-06-10 12:18:03.455112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.455126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.455353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.455366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.455462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.455474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.455720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.455734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.455934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.455949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 
00:29:14.339 [2024-06-10 12:18:03.456220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.456234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.456483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.456497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.456615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.456629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.456860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.456874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 00:29:14.339 [2024-06-10 12:18:03.457025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.339 [2024-06-10 12:18:03.457039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.339 qpair failed and we were unable to recover it. 
00:29:14.339 [2024-06-10 12:18:03.457210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.339 [2024-06-10 12:18:03.457223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.339 qpair failed and we were unable to recover it.
[identical connect() failed / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." message triplet repeated for every subsequent reconnect attempt against tqpair=0x7f5044000b90 (addr=10.0.0.2, port=4420), timestamps 12:18:03.457392 through 12:18:03.479887; repeats elided]
00:29:14.341 [2024-06-10 12:18:03.480053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.341 [2024-06-10 12:18:03.480066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.341 qpair failed and we were unable to recover it. 00:29:14.341 [2024-06-10 12:18:03.480227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.341 [2024-06-10 12:18:03.480241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.341 qpair failed and we were unable to recover it. 00:29:14.341 [2024-06-10 12:18:03.480418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.341 [2024-06-10 12:18:03.480431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.341 qpair failed and we were unable to recover it. 00:29:14.341 [2024-06-10 12:18:03.480607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.341 [2024-06-10 12:18:03.480622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.341 qpair failed and we were unable to recover it. 00:29:14.341 [2024-06-10 12:18:03.480819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.341 [2024-06-10 12:18:03.480833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.341 qpair failed and we were unable to recover it. 
00:29:14.341 [2024-06-10 12:18:03.481069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.341 [2024-06-10 12:18:03.481082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.341 qpair failed and we were unable to recover it. 00:29:14.341 [2024-06-10 12:18:03.481248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.341 [2024-06-10 12:18:03.481261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.341 qpair failed and we were unable to recover it. 00:29:14.341 [2024-06-10 12:18:03.481474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.341 [2024-06-10 12:18:03.481499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.341 qpair failed and we were unable to recover it. 00:29:14.341 [2024-06-10 12:18:03.481658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.341 [2024-06-10 12:18:03.481671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.341 qpair failed and we were unable to recover it. 00:29:14.341 [2024-06-10 12:18:03.481820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.341 [2024-06-10 12:18:03.481834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.341 qpair failed and we were unable to recover it. 
00:29:14.341 [2024-06-10 12:18:03.482017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.341 [2024-06-10 12:18:03.482030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.341 qpair failed and we were unable to recover it. 00:29:14.341 [2024-06-10 12:18:03.482220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.341 [2024-06-10 12:18:03.482233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.341 qpair failed and we were unable to recover it. 00:29:14.341 [2024-06-10 12:18:03.482488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.341 [2024-06-10 12:18:03.482502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.341 qpair failed and we were unable to recover it. 00:29:14.341 [2024-06-10 12:18:03.482739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.341 [2024-06-10 12:18:03.482753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.341 qpair failed and we were unable to recover it. 00:29:14.341 [2024-06-10 12:18:03.482943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.341 [2024-06-10 12:18:03.482956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.341 qpair failed and we were unable to recover it. 
00:29:14.341 [2024-06-10 12:18:03.483173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.341 [2024-06-10 12:18:03.483187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.341 qpair failed and we were unable to recover it. 00:29:14.341 [2024-06-10 12:18:03.483445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.341 [2024-06-10 12:18:03.483460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.341 qpair failed and we were unable to recover it. 00:29:14.341 [2024-06-10 12:18:03.483740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.341 [2024-06-10 12:18:03.483754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.341 qpair failed and we were unable to recover it. 00:29:14.341 [2024-06-10 12:18:03.483856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.341 [2024-06-10 12:18:03.483868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.341 qpair failed and we were unable to recover it. 00:29:14.341 [2024-06-10 12:18:03.484085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.341 [2024-06-10 12:18:03.484098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.341 qpair failed and we were unable to recover it. 
00:29:14.341 [2024-06-10 12:18:03.484379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.341 [2024-06-10 12:18:03.484392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.341 qpair failed and we were unable to recover it. 00:29:14.341 [2024-06-10 12:18:03.484626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.341 [2024-06-10 12:18:03.484640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.341 qpair failed and we were unable to recover it. 00:29:14.341 [2024-06-10 12:18:03.484854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.341 [2024-06-10 12:18:03.484867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.341 qpair failed and we were unable to recover it. 00:29:14.341 [2024-06-10 12:18:03.485021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.341 [2024-06-10 12:18:03.485034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.341 qpair failed and we were unable to recover it. 00:29:14.341 [2024-06-10 12:18:03.485243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.341 [2024-06-10 12:18:03.485257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.341 qpair failed and we were unable to recover it. 
00:29:14.341 [2024-06-10 12:18:03.485491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.341 [2024-06-10 12:18:03.485504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.341 qpair failed and we were unable to recover it. 00:29:14.341 [2024-06-10 12:18:03.485669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.341 [2024-06-10 12:18:03.485683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.341 qpair failed and we were unable to recover it. 00:29:14.341 [2024-06-10 12:18:03.485914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.341 [2024-06-10 12:18:03.485927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.341 qpair failed and we were unable to recover it. 00:29:14.341 [2024-06-10 12:18:03.486142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.341 [2024-06-10 12:18:03.486155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.486388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.486402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 
00:29:14.342 [2024-06-10 12:18:03.486689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.486702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.486860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.486874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.487110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.487123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.487336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.487349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.487523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.487537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 
00:29:14.342 [2024-06-10 12:18:03.487773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.487787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.488045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.488058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.488314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.488328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.488544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.488557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.488795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.488809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 
00:29:14.342 [2024-06-10 12:18:03.488958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.488971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.489187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.489200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.489385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.489399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.489637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.489651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.489869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.489883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 
00:29:14.342 [2024-06-10 12:18:03.490108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.490121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.490307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.490321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.490536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.490550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.490666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.490680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.490790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.490803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 
00:29:14.342 [2024-06-10 12:18:03.490968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.490982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.491194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.491207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.491365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.491385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.491639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.491653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.491758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.491770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 
00:29:14.342 [2024-06-10 12:18:03.492006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.492019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.492233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.492248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.492332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.492344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.492577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.492591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.492804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.492817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 
00:29:14.342 [2024-06-10 12:18:03.492921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.492933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.493153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.493167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.493344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.493358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.493542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.493556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.493798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.493811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 
00:29:14.342 [2024-06-10 12:18:03.494022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.494035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.494294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.494307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.494467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.494484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.494590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.494603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.494866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.494879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 
00:29:14.342 [2024-06-10 12:18:03.495121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.495135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.495297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.495310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.495547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.495561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.495729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.495742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.495920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.495933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 
00:29:14.342 [2024-06-10 12:18:03.496174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.496187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.496337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.496349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.496512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.496526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.496744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.496757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 00:29:14.342 [2024-06-10 12:18:03.496924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.342 [2024-06-10 12:18:03.496937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.342 qpair failed and we were unable to recover it. 
00:29:14.344 [2024-06-10 12:18:03.519486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.519499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.519737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.519750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.519987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.520000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.520237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.520250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.520417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.520431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 
00:29:14.344 [2024-06-10 12:18:03.520620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.520633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.520863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.520876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.520969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.520982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.521195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.521209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.521319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.521332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 
00:29:14.344 [2024-06-10 12:18:03.521577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.521590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.521841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.521854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.522088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.522101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.522312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.522327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.522609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.522622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 
00:29:14.344 [2024-06-10 12:18:03.522708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.522721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.522953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.522966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.523180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.523193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.523414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.523427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.523643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.523657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 
00:29:14.344 [2024-06-10 12:18:03.523815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.523828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.523935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.523948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.524188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.524201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.524443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.524457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.524673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.524686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 
00:29:14.344 [2024-06-10 12:18:03.524910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.524923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.525204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.525217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.525382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.525395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.525573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.525586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.525754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.525768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 
00:29:14.344 [2024-06-10 12:18:03.526004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.526017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.526228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.526241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.526419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.526432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.526604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.526618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.526764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.526777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 
00:29:14.344 [2024-06-10 12:18:03.526936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.526949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.527184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.527198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.527433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.527446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.527707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.527720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.527871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.527884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 
00:29:14.344 [2024-06-10 12:18:03.528122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.528135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.528398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.528411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.528556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.528570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.528687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.528700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.528932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.528946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 
00:29:14.344 [2024-06-10 12:18:03.529104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.529117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.529263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.529276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.529513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.529526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.529724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.344 [2024-06-10 12:18:03.529737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.344 qpair failed and we were unable to recover it. 00:29:14.344 [2024-06-10 12:18:03.529974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.345 [2024-06-10 12:18:03.529987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.345 qpair failed and we were unable to recover it. 
00:29:14.345 [2024-06-10 12:18:03.530226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.345 [2024-06-10 12:18:03.530239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.345 qpair failed and we were unable to recover it. 00:29:14.345 [2024-06-10 12:18:03.530436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.345 [2024-06-10 12:18:03.530449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.345 qpair failed and we were unable to recover it. 00:29:14.345 [2024-06-10 12:18:03.530663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.345 [2024-06-10 12:18:03.530677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.345 qpair failed and we were unable to recover it. 00:29:14.345 [2024-06-10 12:18:03.530938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.345 [2024-06-10 12:18:03.530953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.345 qpair failed and we were unable to recover it. 00:29:14.345 [2024-06-10 12:18:03.531118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.345 [2024-06-10 12:18:03.531131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.345 qpair failed and we were unable to recover it. 
00:29:14.345 [2024-06-10 12:18:03.531296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.345 [2024-06-10 12:18:03.531309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.345 qpair failed and we were unable to recover it. 00:29:14.345 [2024-06-10 12:18:03.531532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.345 [2024-06-10 12:18:03.531546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.345 qpair failed and we were unable to recover it. 00:29:14.345 [2024-06-10 12:18:03.531713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.345 [2024-06-10 12:18:03.531726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.345 qpair failed and we were unable to recover it. 00:29:14.345 [2024-06-10 12:18:03.531966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.345 [2024-06-10 12:18:03.531980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.345 qpair failed and we were unable to recover it. 00:29:14.345 [2024-06-10 12:18:03.532173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.345 [2024-06-10 12:18:03.532187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.345 qpair failed and we were unable to recover it. 
00:29:14.345 [2024-06-10 12:18:03.532346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.345 [2024-06-10 12:18:03.532359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.345 qpair failed and we were unable to recover it. 00:29:14.345 [2024-06-10 12:18:03.532574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.345 [2024-06-10 12:18:03.532588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.345 qpair failed and we were unable to recover it. 00:29:14.345 [2024-06-10 12:18:03.532702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.345 [2024-06-10 12:18:03.532716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.345 qpair failed and we were unable to recover it. 00:29:14.345 [2024-06-10 12:18:03.532965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.345 [2024-06-10 12:18:03.532978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.345 qpair failed and we were unable to recover it. 00:29:14.345 [2024-06-10 12:18:03.533135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.345 [2024-06-10 12:18:03.533148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.345 qpair failed and we were unable to recover it. 
00:29:14.345 [2024-06-10 12:18:03.533342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.345 [2024-06-10 12:18:03.533356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.345 qpair failed and we were unable to recover it. 00:29:14.345 [2024-06-10 12:18:03.533502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.345 [2024-06-10 12:18:03.533516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.345 qpair failed and we were unable to recover it. 00:29:14.345 [2024-06-10 12:18:03.533703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.345 [2024-06-10 12:18:03.533716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.345 qpair failed and we were unable to recover it. 00:29:14.345 [2024-06-10 12:18:03.533864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.345 [2024-06-10 12:18:03.533877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.345 qpair failed and we were unable to recover it. 00:29:14.345 [2024-06-10 12:18:03.534118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.345 [2024-06-10 12:18:03.534132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.345 qpair failed and we were unable to recover it. 
00:29:14.345 [2024-06-10 12:18:03.534298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.345 [2024-06-10 12:18:03.534311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.345 qpair failed and we were unable to recover it. 00:29:14.345 [2024-06-10 12:18:03.534457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.345 [2024-06-10 12:18:03.534470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.345 qpair failed and we were unable to recover it. 00:29:14.345 [2024-06-10 12:18:03.534750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.345 [2024-06-10 12:18:03.534764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.345 qpair failed and we were unable to recover it. 00:29:14.345 [2024-06-10 12:18:03.534930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.345 [2024-06-10 12:18:03.534943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.345 qpair failed and we were unable to recover it. 00:29:14.345 [2024-06-10 12:18:03.535116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.345 [2024-06-10 12:18:03.535129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.345 qpair failed and we were unable to recover it. 
00:29:14.345 [2024-06-10 12:18:03.535233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.345 [2024-06-10 12:18:03.535246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.345 qpair failed and we were unable to recover it. 00:29:14.345 [2024-06-10 12:18:03.535495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.345 [2024-06-10 12:18:03.535509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.345 qpair failed and we were unable to recover it. 00:29:14.345 [2024-06-10 12:18:03.535683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.345 [2024-06-10 12:18:03.535696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.345 qpair failed and we were unable to recover it. 00:29:14.345 [2024-06-10 12:18:03.535787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.345 [2024-06-10 12:18:03.535799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.345 qpair failed and we were unable to recover it. 00:29:14.345 [2024-06-10 12:18:03.536060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.345 [2024-06-10 12:18:03.536073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.345 qpair failed and we were unable to recover it. 
00:29:14.347 [2024-06-10 12:18:03.561009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.561050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 00:29:14.347 [2024-06-10 12:18:03.561298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.561339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 00:29:14.347 [2024-06-10 12:18:03.561527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.561569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 00:29:14.347 [2024-06-10 12:18:03.561864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.561906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 00:29:14.347 [2024-06-10 12:18:03.562193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.562234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 
00:29:14.347 [2024-06-10 12:18:03.562397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.562437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 00:29:14.347 [2024-06-10 12:18:03.562810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.562890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 00:29:14.347 [2024-06-10 12:18:03.563128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.563173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 00:29:14.347 [2024-06-10 12:18:03.563435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.563493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 00:29:14.347 [2024-06-10 12:18:03.563773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.563816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 
00:29:14.347 [2024-06-10 12:18:03.564055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.564097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 00:29:14.347 [2024-06-10 12:18:03.564397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.564415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 00:29:14.347 [2024-06-10 12:18:03.564696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.564738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 00:29:14.347 [2024-06-10 12:18:03.565043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.565085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 00:29:14.347 [2024-06-10 12:18:03.565303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.565321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 
00:29:14.347 [2024-06-10 12:18:03.565578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.565619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 00:29:14.347 [2024-06-10 12:18:03.565890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.565930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 00:29:14.347 [2024-06-10 12:18:03.566183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.566223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 00:29:14.347 [2024-06-10 12:18:03.566434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.566451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 00:29:14.347 [2024-06-10 12:18:03.566706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.566724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 
00:29:14.347 [2024-06-10 12:18:03.566946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.566964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 00:29:14.347 [2024-06-10 12:18:03.567195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.567235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 00:29:14.347 [2024-06-10 12:18:03.567511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.567553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 00:29:14.347 [2024-06-10 12:18:03.567838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.567884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 00:29:14.347 [2024-06-10 12:18:03.568185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.568225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 
00:29:14.347 [2024-06-10 12:18:03.568567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.568609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 00:29:14.347 [2024-06-10 12:18:03.568908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.568948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 00:29:14.347 [2024-06-10 12:18:03.569235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.569275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 00:29:14.347 [2024-06-10 12:18:03.569570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.569613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 00:29:14.347 [2024-06-10 12:18:03.569837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.569878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 
00:29:14.347 [2024-06-10 12:18:03.570105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.570145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 00:29:14.347 [2024-06-10 12:18:03.570470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.570520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 00:29:14.347 [2024-06-10 12:18:03.570765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.570806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 00:29:14.347 [2024-06-10 12:18:03.571111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.571151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 00:29:14.347 [2024-06-10 12:18:03.571463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.571512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 
00:29:14.347 [2024-06-10 12:18:03.571789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.571829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 00:29:14.347 [2024-06-10 12:18:03.572128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.572167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 00:29:14.347 [2024-06-10 12:18:03.572410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.572451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.347 qpair failed and we were unable to recover it. 00:29:14.347 [2024-06-10 12:18:03.572675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.347 [2024-06-10 12:18:03.572716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 00:29:14.348 [2024-06-10 12:18:03.572957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.572998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 
00:29:14.348 [2024-06-10 12:18:03.573303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.573342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 00:29:14.348 [2024-06-10 12:18:03.573624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.573665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 00:29:14.348 [2024-06-10 12:18:03.573829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.573847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 00:29:14.348 [2024-06-10 12:18:03.574027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.574067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 00:29:14.348 [2024-06-10 12:18:03.574368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.574408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 
00:29:14.348 [2024-06-10 12:18:03.574669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.574710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 00:29:14.348 [2024-06-10 12:18:03.574924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.574965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 00:29:14.348 [2024-06-10 12:18:03.575262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.575280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 00:29:14.348 [2024-06-10 12:18:03.575459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.575510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 00:29:14.348 [2024-06-10 12:18:03.575733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.575774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 
00:29:14.348 [2024-06-10 12:18:03.576101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.576119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 00:29:14.348 [2024-06-10 12:18:03.576367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.576385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 00:29:14.348 [2024-06-10 12:18:03.576632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.576667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 00:29:14.348 [2024-06-10 12:18:03.576949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.576989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 00:29:14.348 [2024-06-10 12:18:03.577230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.577270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 
00:29:14.348 [2024-06-10 12:18:03.577586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.577628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 00:29:14.348 [2024-06-10 12:18:03.577849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.577899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 00:29:14.348 [2024-06-10 12:18:03.578058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.578075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 00:29:14.348 [2024-06-10 12:18:03.578312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.578353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 00:29:14.348 [2024-06-10 12:18:03.578658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.578699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 
00:29:14.348 [2024-06-10 12:18:03.578923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.578963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 00:29:14.348 [2024-06-10 12:18:03.579275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.579315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 00:29:14.348 [2024-06-10 12:18:03.579488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.579530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 00:29:14.348 [2024-06-10 12:18:03.579818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.579865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 00:29:14.348 [2024-06-10 12:18:03.580012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.580029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 
00:29:14.348 [2024-06-10 12:18:03.580279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.580297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 00:29:14.348 [2024-06-10 12:18:03.580539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.580581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 00:29:14.348 [2024-06-10 12:18:03.580859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.580900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 00:29:14.348 [2024-06-10 12:18:03.581200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.581248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 00:29:14.348 [2024-06-10 12:18:03.581494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.581512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 
00:29:14.348 [2024-06-10 12:18:03.581714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.581731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 00:29:14.348 [2024-06-10 12:18:03.581977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.581995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 00:29:14.348 [2024-06-10 12:18:03.582236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.582277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 00:29:14.348 [2024-06-10 12:18:03.582563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.582604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 00:29:14.348 [2024-06-10 12:18:03.582908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.582947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 
00:29:14.348 [2024-06-10 12:18:03.583176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.583216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 00:29:14.348 [2024-06-10 12:18:03.583429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.583447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 00:29:14.348 [2024-06-10 12:18:03.583692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.583710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 00:29:14.348 [2024-06-10 12:18:03.583947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.583964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 00:29:14.348 [2024-06-10 12:18:03.584249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.348 [2024-06-10 12:18:03.584289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.348 qpair failed and we were unable to recover it. 
00:29:14.351 [2024-06-10 12:18:03.616909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.351 [2024-06-10 12:18:03.616949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.351 qpair failed and we were unable to recover it. 00:29:14.351 [2024-06-10 12:18:03.617274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.351 [2024-06-10 12:18:03.617314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.351 qpair failed and we were unable to recover it. 00:29:14.351 [2024-06-10 12:18:03.617553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.351 [2024-06-10 12:18:03.617595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.351 qpair failed and we were unable to recover it. 00:29:14.351 [2024-06-10 12:18:03.617805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.351 [2024-06-10 12:18:03.617846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.351 qpair failed and we were unable to recover it. 00:29:14.351 [2024-06-10 12:18:03.618135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.351 [2024-06-10 12:18:03.618175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.351 qpair failed and we were unable to recover it. 
00:29:14.351 [2024-06-10 12:18:03.618487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.351 [2024-06-10 12:18:03.618529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.351 qpair failed and we were unable to recover it. 00:29:14.351 [2024-06-10 12:18:03.618773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.351 [2024-06-10 12:18:03.618814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.351 qpair failed and we were unable to recover it. 00:29:14.351 [2024-06-10 12:18:03.619110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.351 [2024-06-10 12:18:03.619128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.351 qpair failed and we were unable to recover it. 00:29:14.351 [2024-06-10 12:18:03.619377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.351 [2024-06-10 12:18:03.619415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.351 qpair failed and we were unable to recover it. 00:29:14.351 [2024-06-10 12:18:03.619755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.351 [2024-06-10 12:18:03.619797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.351 qpair failed and we were unable to recover it. 
00:29:14.351 [2024-06-10 12:18:03.620026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.351 [2024-06-10 12:18:03.620066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.351 qpair failed and we were unable to recover it. 00:29:14.351 [2024-06-10 12:18:03.620383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.351 [2024-06-10 12:18:03.620423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.351 qpair failed and we were unable to recover it. 00:29:14.352 [2024-06-10 12:18:03.620656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.620698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 00:29:14.352 [2024-06-10 12:18:03.620921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.620960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 00:29:14.352 [2024-06-10 12:18:03.621170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.621212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 
00:29:14.352 [2024-06-10 12:18:03.621512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.621553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 00:29:14.352 [2024-06-10 12:18:03.621803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.621844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 00:29:14.352 [2024-06-10 12:18:03.622171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.622212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 00:29:14.352 [2024-06-10 12:18:03.622512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.622553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 00:29:14.352 [2024-06-10 12:18:03.622832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.622873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 
00:29:14.352 [2024-06-10 12:18:03.623067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.623085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 00:29:14.352 [2024-06-10 12:18:03.623260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.623278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 00:29:14.352 [2024-06-10 12:18:03.623529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.623571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 00:29:14.352 [2024-06-10 12:18:03.623787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.623828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 00:29:14.352 [2024-06-10 12:18:03.624108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.624148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 
00:29:14.352 [2024-06-10 12:18:03.624447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.624496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 00:29:14.352 [2024-06-10 12:18:03.624772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.624813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 00:29:14.352 [2024-06-10 12:18:03.625043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.625084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 00:29:14.352 [2024-06-10 12:18:03.625337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.625377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 00:29:14.352 [2024-06-10 12:18:03.625537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.625580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 
00:29:14.352 [2024-06-10 12:18:03.625750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.625790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 00:29:14.352 [2024-06-10 12:18:03.626044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.626084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 00:29:14.352 [2024-06-10 12:18:03.626253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.626294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 00:29:14.352 [2024-06-10 12:18:03.626605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.626652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 00:29:14.352 [2024-06-10 12:18:03.626954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.626994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 
00:29:14.352 [2024-06-10 12:18:03.627277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.627295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 00:29:14.352 [2024-06-10 12:18:03.627449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.627467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 00:29:14.352 [2024-06-10 12:18:03.627742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.627760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 00:29:14.352 [2024-06-10 12:18:03.627882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.627900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 00:29:14.352 [2024-06-10 12:18:03.628144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.628162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 
00:29:14.352 [2024-06-10 12:18:03.628296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.628337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 00:29:14.352 [2024-06-10 12:18:03.628547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.628589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 00:29:14.352 [2024-06-10 12:18:03.628805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.628846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 00:29:14.352 [2024-06-10 12:18:03.629073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.629113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 00:29:14.352 [2024-06-10 12:18:03.629326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.629370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 
00:29:14.352 [2024-06-10 12:18:03.629613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.629631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 00:29:14.352 [2024-06-10 12:18:03.629801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.629820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 00:29:14.352 [2024-06-10 12:18:03.630004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.630023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 00:29:14.352 [2024-06-10 12:18:03.630211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.352 [2024-06-10 12:18:03.630251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.352 qpair failed and we were unable to recover it. 00:29:14.352 [2024-06-10 12:18:03.630531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.353 [2024-06-10 12:18:03.630573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.353 qpair failed and we were unable to recover it. 
00:29:14.353 [2024-06-10 12:18:03.630847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.353 [2024-06-10 12:18:03.630887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.353 qpair failed and we were unable to recover it. 00:29:14.353 [2024-06-10 12:18:03.631058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.353 [2024-06-10 12:18:03.631097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.353 qpair failed and we were unable to recover it. 00:29:14.353 [2024-06-10 12:18:03.631250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.353 [2024-06-10 12:18:03.631291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.353 qpair failed and we were unable to recover it. 00:29:14.353 [2024-06-10 12:18:03.631614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.353 [2024-06-10 12:18:03.631654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.353 qpair failed and we were unable to recover it. 00:29:14.353 [2024-06-10 12:18:03.631958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.353 [2024-06-10 12:18:03.631999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.353 qpair failed and we were unable to recover it. 
00:29:14.353 [2024-06-10 12:18:03.632144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.353 [2024-06-10 12:18:03.632184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.353 qpair failed and we were unable to recover it. 00:29:14.353 [2024-06-10 12:18:03.632424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.353 [2024-06-10 12:18:03.632442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.353 qpair failed and we were unable to recover it. 00:29:14.353 [2024-06-10 12:18:03.632738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.353 [2024-06-10 12:18:03.632757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.353 qpair failed and we were unable to recover it. 00:29:14.353 [2024-06-10 12:18:03.632916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.353 [2024-06-10 12:18:03.632933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.353 qpair failed and we were unable to recover it. 00:29:14.353 [2024-06-10 12:18:03.633110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.353 [2024-06-10 12:18:03.633128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.353 qpair failed and we were unable to recover it. 
00:29:14.353 [2024-06-10 12:18:03.633287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.353 [2024-06-10 12:18:03.633306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.353 qpair failed and we were unable to recover it. 00:29:14.353 [2024-06-10 12:18:03.633436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.353 [2024-06-10 12:18:03.633454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.353 qpair failed and we were unable to recover it. 00:29:14.353 [2024-06-10 12:18:03.633719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.353 [2024-06-10 12:18:03.633737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.353 qpair failed and we were unable to recover it. 00:29:14.353 [2024-06-10 12:18:03.633928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.353 [2024-06-10 12:18:03.633968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.353 qpair failed and we were unable to recover it. 00:29:14.353 [2024-06-10 12:18:03.634140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.353 [2024-06-10 12:18:03.634180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.353 qpair failed and we were unable to recover it. 
00:29:14.353 [2024-06-10 12:18:03.634457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.353 [2024-06-10 12:18:03.634511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.353 qpair failed and we were unable to recover it. 00:29:14.353 [2024-06-10 12:18:03.634733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.353 [2024-06-10 12:18:03.634774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.353 qpair failed and we were unable to recover it. 00:29:14.353 [2024-06-10 12:18:03.634985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.353 [2024-06-10 12:18:03.635025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.353 qpair failed and we were unable to recover it. 00:29:14.353 [2024-06-10 12:18:03.635332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.353 [2024-06-10 12:18:03.635372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.353 qpair failed and we were unable to recover it. 00:29:14.353 [2024-06-10 12:18:03.635610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.353 [2024-06-10 12:18:03.635653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.353 qpair failed and we were unable to recover it. 
00:29:14.353 [2024-06-10 12:18:03.635929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.353 [2024-06-10 12:18:03.635968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.353 qpair failed and we were unable to recover it. 00:29:14.353 [2024-06-10 12:18:03.636185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.353 [2024-06-10 12:18:03.636203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.353 qpair failed and we were unable to recover it. 00:29:14.353 [2024-06-10 12:18:03.636465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.353 [2024-06-10 12:18:03.636514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.353 qpair failed and we were unable to recover it. 00:29:14.353 [2024-06-10 12:18:03.636763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.353 [2024-06-10 12:18:03.636809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.353 qpair failed and we were unable to recover it. 00:29:14.353 [2024-06-10 12:18:03.637132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.353 [2024-06-10 12:18:03.637172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.353 qpair failed and we were unable to recover it. 
00:29:14.353 [2024-06-10 12:18:03.637507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.353 [2024-06-10 12:18:03.637549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:14.353 qpair failed and we were unable to recover it.
00:29:14.353 [... identical connect()/qpair-failure sequence repeated 24 more times for tqpair=0x7f503c000b90, timestamps 12:18:03.637722 through 12:18:03.643415 ...]
00:29:14.354 [2024-06-10 12:18:03.643689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.354 [2024-06-10 12:18:03.643771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.354 qpair failed and we were unable to recover it.
00:29:14.354 [... identical sequence repeated 89 more times for tqpair=0x7f5044000b90, timestamps 12:18:03.644008 through 12:18:03.660953 ...]
00:29:14.356 [2024-06-10 12:18:03.661164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.356 [2024-06-10 12:18:03.661178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.356 qpair failed and we were unable to recover it. 00:29:14.356 [2024-06-10 12:18:03.661279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.356 [2024-06-10 12:18:03.661292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.356 qpair failed and we were unable to recover it. 00:29:14.356 [2024-06-10 12:18:03.661444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.356 [2024-06-10 12:18:03.661458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.356 qpair failed and we were unable to recover it. 00:29:14.356 [2024-06-10 12:18:03.661678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.356 [2024-06-10 12:18:03.661692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.356 qpair failed and we were unable to recover it. 00:29:14.356 [2024-06-10 12:18:03.661842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.356 [2024-06-10 12:18:03.661856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.356 qpair failed and we were unable to recover it. 
00:29:14.356 [2024-06-10 12:18:03.662019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.356 [2024-06-10 12:18:03.662032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.356 qpair failed and we were unable to recover it. 00:29:14.356 [2024-06-10 12:18:03.662269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.356 [2024-06-10 12:18:03.662282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.356 qpair failed and we were unable to recover it. 00:29:14.356 [2024-06-10 12:18:03.662442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.356 [2024-06-10 12:18:03.662455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.356 qpair failed and we were unable to recover it. 00:29:14.356 [2024-06-10 12:18:03.662705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.356 [2024-06-10 12:18:03.662719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.356 qpair failed and we were unable to recover it. 00:29:14.356 [2024-06-10 12:18:03.662960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.356 [2024-06-10 12:18:03.662973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.356 qpair failed and we were unable to recover it. 
00:29:14.356 [2024-06-10 12:18:03.663132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.356 [2024-06-10 12:18:03.663146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.356 qpair failed and we were unable to recover it. 00:29:14.356 [2024-06-10 12:18:03.663383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.356 [2024-06-10 12:18:03.663396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.356 qpair failed and we were unable to recover it. 00:29:14.356 [2024-06-10 12:18:03.663562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.356 [2024-06-10 12:18:03.663576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.356 qpair failed and we were unable to recover it. 00:29:14.356 [2024-06-10 12:18:03.663857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.356 [2024-06-10 12:18:03.663871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.356 qpair failed and we were unable to recover it. 00:29:14.356 [2024-06-10 12:18:03.664088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.356 [2024-06-10 12:18:03.664101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.356 qpair failed and we were unable to recover it. 
00:29:14.356 [2024-06-10 12:18:03.664250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.356 [2024-06-10 12:18:03.664263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.356 qpair failed and we were unable to recover it. 00:29:14.356 [2024-06-10 12:18:03.664505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.356 [2024-06-10 12:18:03.664518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.356 qpair failed and we were unable to recover it. 00:29:14.356 [2024-06-10 12:18:03.664755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.356 [2024-06-10 12:18:03.664769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.356 qpair failed and we were unable to recover it. 00:29:14.356 [2024-06-10 12:18:03.664868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.356 [2024-06-10 12:18:03.664881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.356 qpair failed and we were unable to recover it. 00:29:14.356 [2024-06-10 12:18:03.664981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.356 [2024-06-10 12:18:03.664995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.356 qpair failed and we were unable to recover it. 
00:29:14.356 [2024-06-10 12:18:03.665160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.356 [2024-06-10 12:18:03.665173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.356 qpair failed and we were unable to recover it. 00:29:14.356 [2024-06-10 12:18:03.665321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.356 [2024-06-10 12:18:03.665334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.356 qpair failed and we were unable to recover it. 00:29:14.356 [2024-06-10 12:18:03.665425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.356 [2024-06-10 12:18:03.665437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.356 qpair failed and we were unable to recover it. 00:29:14.356 [2024-06-10 12:18:03.665536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.356 [2024-06-10 12:18:03.665549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.356 qpair failed and we were unable to recover it. 00:29:14.356 [2024-06-10 12:18:03.665706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.356 [2024-06-10 12:18:03.665720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.356 qpair failed and we were unable to recover it. 
00:29:14.356 [2024-06-10 12:18:03.665814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.356 [2024-06-10 12:18:03.665827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.356 qpair failed and we were unable to recover it. 00:29:14.356 [2024-06-10 12:18:03.666080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.356 [2024-06-10 12:18:03.666093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.356 qpair failed and we were unable to recover it. 00:29:14.356 [2024-06-10 12:18:03.666286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.356 [2024-06-10 12:18:03.666299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.356 qpair failed and we were unable to recover it. 00:29:14.356 [2024-06-10 12:18:03.666521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.356 [2024-06-10 12:18:03.666535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.356 qpair failed and we were unable to recover it. 00:29:14.356 [2024-06-10 12:18:03.666789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.356 [2024-06-10 12:18:03.666802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.356 qpair failed and we were unable to recover it. 
00:29:14.356 [2024-06-10 12:18:03.666967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.356 [2024-06-10 12:18:03.666982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.356 qpair failed and we were unable to recover it. 00:29:14.356 [2024-06-10 12:18:03.667221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.356 [2024-06-10 12:18:03.667234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.356 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.667393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.667407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.667583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.667596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.667757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.667771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 
00:29:14.357 [2024-06-10 12:18:03.667923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.667936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.668095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.668108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.668322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.668335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.668578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.668591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.668752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.668765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 
00:29:14.357 [2024-06-10 12:18:03.668953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.668966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.669190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.669204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.669439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.669453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.669629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.669643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.669753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.669765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 
00:29:14.357 [2024-06-10 12:18:03.669863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.669875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.669953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.669965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.670177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.670190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.670335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.670349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.670587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.670600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 
00:29:14.357 [2024-06-10 12:18:03.670711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.670724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.670913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.670927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.671083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.671096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.671258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.671271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.671382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.671396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 
00:29:14.357 [2024-06-10 12:18:03.671546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.671559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.671663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.671675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.671898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.671912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.672170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.672184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.672408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.672421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 
00:29:14.357 [2024-06-10 12:18:03.672568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.672582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.672819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.672833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.673046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.673059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.673162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.673174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.673407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.673420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 
00:29:14.357 [2024-06-10 12:18:03.673587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.673601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.673712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.673725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.673939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.673952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.674211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.674224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.674411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.674424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 
00:29:14.357 [2024-06-10 12:18:03.674692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.674709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.674810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.674822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.674915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.674927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.675064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.675077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 00:29:14.357 [2024-06-10 12:18:03.675268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.357 [2024-06-10 12:18:03.675281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.357 qpair failed and we were unable to recover it. 
00:29:14.357 [2024-06-10 12:18:03.675429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.358 [2024-06-10 12:18:03.675442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.358 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair failure repeats, with fresh timestamps, for every retry between 12:18:03.675 and 12:18:03.697 ...]
00:29:14.360 [2024-06-10 12:18:03.697840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.360 [2024-06-10 12:18:03.697853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.360 qpair failed and we were unable to recover it.
00:29:14.360 [2024-06-10 12:18:03.697951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.360 [2024-06-10 12:18:03.697966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.360 qpair failed and we were unable to recover it. 00:29:14.360 [2024-06-10 12:18:03.698126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.360 [2024-06-10 12:18:03.698139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.360 qpair failed and we were unable to recover it. 00:29:14.360 [2024-06-10 12:18:03.698354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.360 [2024-06-10 12:18:03.698368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.360 qpair failed and we were unable to recover it. 00:29:14.360 [2024-06-10 12:18:03.698470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.360 [2024-06-10 12:18:03.698486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.360 qpair failed and we were unable to recover it. 00:29:14.360 [2024-06-10 12:18:03.698600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.360 [2024-06-10 12:18:03.698614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.360 qpair failed and we were unable to recover it. 
00:29:14.360 [2024-06-10 12:18:03.698843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.360 [2024-06-10 12:18:03.698856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.360 qpair failed and we were unable to recover it. 00:29:14.360 [2024-06-10 12:18:03.699116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.360 [2024-06-10 12:18:03.699129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.360 qpair failed and we were unable to recover it. 00:29:14.360 [2024-06-10 12:18:03.699354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.360 [2024-06-10 12:18:03.699367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.360 qpair failed and we were unable to recover it. 00:29:14.360 [2024-06-10 12:18:03.699609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.360 [2024-06-10 12:18:03.699622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.360 qpair failed and we were unable to recover it. 00:29:14.360 [2024-06-10 12:18:03.699811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.360 [2024-06-10 12:18:03.699824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.360 qpair failed and we were unable to recover it. 
00:29:14.360 [2024-06-10 12:18:03.700040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.360 [2024-06-10 12:18:03.700054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.360 qpair failed and we were unable to recover it. 00:29:14.360 [2024-06-10 12:18:03.700291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.360 [2024-06-10 12:18:03.700304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.360 qpair failed and we were unable to recover it. 00:29:14.360 [2024-06-10 12:18:03.700454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.360 [2024-06-10 12:18:03.700467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.360 qpair failed and we were unable to recover it. 00:29:14.360 [2024-06-10 12:18:03.700572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.360 [2024-06-10 12:18:03.700584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.360 qpair failed and we were unable to recover it. 00:29:14.360 [2024-06-10 12:18:03.700809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.360 [2024-06-10 12:18:03.700822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.360 qpair failed and we were unable to recover it. 
00:29:14.360 [2024-06-10 12:18:03.701083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.360 [2024-06-10 12:18:03.701096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.360 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.701249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.701263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.701513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.701526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.701755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.701768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.701934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.701947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 
00:29:14.361 [2024-06-10 12:18:03.702065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.702078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.702184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.702197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.702433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.702446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.702603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.702617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.702834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.702848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 
00:29:14.361 [2024-06-10 12:18:03.702996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.703009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.703159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.703172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.703397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.703411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.703626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.703640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.703854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.703867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 
00:29:14.361 [2024-06-10 12:18:03.704037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.704050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.704284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.704298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.704511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.704525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.704802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.704815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.704977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.704991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 
00:29:14.361 [2024-06-10 12:18:03.705254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.705267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.705366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.705378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.705535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.705549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.705725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.705738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.705855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.705868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 
00:29:14.361 [2024-06-10 12:18:03.706084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.706099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.706359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.706373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.706541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.706554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.706673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.706686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.706899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.706912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 
00:29:14.361 [2024-06-10 12:18:03.707130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.707143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.707250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.707262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.707408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.707420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.707581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.707594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.707748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.707762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 
00:29:14.361 [2024-06-10 12:18:03.707869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.707881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.707974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.707986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.708231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.708244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.708420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.708434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.708631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.708644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 
00:29:14.361 [2024-06-10 12:18:03.708869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.708883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.709102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.709116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.709402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.709415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.709610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.709624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.709884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.709897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 
00:29:14.361 [2024-06-10 12:18:03.710155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.710168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.710425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.710439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.710666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.361 [2024-06-10 12:18:03.710680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.361 qpair failed and we were unable to recover it. 00:29:14.361 [2024-06-10 12:18:03.710917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.362 [2024-06-10 12:18:03.710931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.362 qpair failed and we were unable to recover it. 00:29:14.362 [2024-06-10 12:18:03.711191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.362 [2024-06-10 12:18:03.711204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.362 qpair failed and we were unable to recover it. 
00:29:14.362 [2024-06-10 12:18:03.711435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.362 [2024-06-10 12:18:03.711448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.362 qpair failed and we were unable to recover it. 00:29:14.362 [2024-06-10 12:18:03.711628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.362 [2024-06-10 12:18:03.711641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.362 qpair failed and we were unable to recover it. 00:29:14.362 [2024-06-10 12:18:03.711790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.362 [2024-06-10 12:18:03.711804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.362 qpair failed and we were unable to recover it. 00:29:14.362 [2024-06-10 12:18:03.712064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.362 [2024-06-10 12:18:03.712077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.362 qpair failed and we were unable to recover it. 00:29:14.362 [2024-06-10 12:18:03.712264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.362 [2024-06-10 12:18:03.712277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.362 qpair failed and we were unable to recover it. 
00:29:14.362 [2024-06-10 12:18:03.712554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.362 [2024-06-10 12:18:03.712567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.362 qpair failed and we were unable to recover it. 00:29:14.362 [2024-06-10 12:18:03.712668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.362 [2024-06-10 12:18:03.712680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.362 qpair failed and we were unable to recover it. 00:29:14.362 [2024-06-10 12:18:03.712917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.362 [2024-06-10 12:18:03.712930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.362 qpair failed and we were unable to recover it. 00:29:14.362 [2024-06-10 12:18:03.713194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.362 [2024-06-10 12:18:03.713207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.362 qpair failed and we were unable to recover it. 00:29:14.362 [2024-06-10 12:18:03.713441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.362 [2024-06-10 12:18:03.713454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.362 qpair failed and we were unable to recover it. 
00:29:14.362 [2024-06-10 12:18:03.713723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.362 [2024-06-10 12:18:03.713736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.362 qpair failed and we were unable to recover it. 00:29:14.362 [2024-06-10 12:18:03.713950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.362 [2024-06-10 12:18:03.713964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.362 qpair failed and we were unable to recover it. 00:29:14.362 [2024-06-10 12:18:03.714063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.362 [2024-06-10 12:18:03.714075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.362 qpair failed and we were unable to recover it. 00:29:14.362 [2024-06-10 12:18:03.714235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.362 [2024-06-10 12:18:03.714248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.362 qpair failed and we were unable to recover it. 00:29:14.362 [2024-06-10 12:18:03.714420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.362 [2024-06-10 12:18:03.714433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.362 qpair failed and we were unable to recover it. 
00:29:14.364 [2024-06-10 12:18:03.740607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.364 [2024-06-10 12:18:03.740657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.364 qpair failed and we were unable to recover it. 00:29:14.364 [2024-06-10 12:18:03.740888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.364 [2024-06-10 12:18:03.740929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.364 qpair failed and we were unable to recover it. 00:29:14.364 [2024-06-10 12:18:03.741210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.364 [2024-06-10 12:18:03.741252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.364 qpair failed and we were unable to recover it. 00:29:14.364 [2024-06-10 12:18:03.741551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.364 [2024-06-10 12:18:03.741593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.364 qpair failed and we were unable to recover it. 00:29:14.364 [2024-06-10 12:18:03.741758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.364 [2024-06-10 12:18:03.741771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.364 qpair failed and we were unable to recover it. 
00:29:14.364 [2024-06-10 12:18:03.741936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.364 [2024-06-10 12:18:03.741948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.364 qpair failed and we were unable to recover it. 00:29:14.364 [2024-06-10 12:18:03.742095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.364 [2024-06-10 12:18:03.742108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.364 qpair failed and we were unable to recover it. 00:29:14.364 [2024-06-10 12:18:03.742322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.364 [2024-06-10 12:18:03.742335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.364 qpair failed and we were unable to recover it. 00:29:14.364 [2024-06-10 12:18:03.742460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.364 [2024-06-10 12:18:03.742512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.364 qpair failed and we were unable to recover it. 00:29:14.364 [2024-06-10 12:18:03.742745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.364 [2024-06-10 12:18:03.742785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.364 qpair failed and we were unable to recover it. 
00:29:14.364 [2024-06-10 12:18:03.743065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.364 [2024-06-10 12:18:03.743111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.364 qpair failed and we were unable to recover it. 00:29:14.364 [2024-06-10 12:18:03.743320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.364 [2024-06-10 12:18:03.743361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.364 qpair failed and we were unable to recover it. 00:29:14.364 [2024-06-10 12:18:03.743665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.364 [2024-06-10 12:18:03.743678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.364 qpair failed and we were unable to recover it. 00:29:14.364 [2024-06-10 12:18:03.743848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.364 [2024-06-10 12:18:03.743861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.364 qpair failed and we were unable to recover it. 00:29:14.364 [2024-06-10 12:18:03.744035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.364 [2024-06-10 12:18:03.744048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.364 qpair failed and we were unable to recover it. 
00:29:14.364 [2024-06-10 12:18:03.744295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.364 [2024-06-10 12:18:03.744336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.364 qpair failed and we were unable to recover it. 00:29:14.364 [2024-06-10 12:18:03.744556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.364 [2024-06-10 12:18:03.744598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.364 qpair failed and we were unable to recover it. 00:29:14.364 [2024-06-10 12:18:03.744772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.364 [2024-06-10 12:18:03.744785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.744953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.744999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.745216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.745256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 
00:29:14.365 [2024-06-10 12:18:03.745471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.745526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.745638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.745651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.745875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.745915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.746076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.746117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.746434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.746485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 
00:29:14.365 [2024-06-10 12:18:03.746719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.746761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.747004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.747044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.747346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.747387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.747595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.747638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.747907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.747920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 
00:29:14.365 [2024-06-10 12:18:03.748037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.748077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.748304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.748344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.748597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.748639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.748855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.748896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.749116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.749157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 
00:29:14.365 [2024-06-10 12:18:03.749457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.749499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.749713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.749726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.749904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.749918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.750128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.750142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.750337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.750378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 
00:29:14.365 [2024-06-10 12:18:03.750622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.750663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.750894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.750935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.751178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.751219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.751522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.751563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.751795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.751837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 
00:29:14.365 [2024-06-10 12:18:03.752091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.752133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.752351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.752392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.752625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.752667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.752825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.752866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.753030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.753071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 
00:29:14.365 [2024-06-10 12:18:03.753366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.753412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.753680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.753694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.753934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.753947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.754135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.754177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.754456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.754507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 
00:29:14.365 [2024-06-10 12:18:03.754752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.754766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.754892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.754933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.755217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.755258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.755472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.755490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.755656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.755696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 
00:29:14.365 [2024-06-10 12:18:03.755974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.756015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.756233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.756273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.756517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.756559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.756790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.756831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.757012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.757053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 
00:29:14.365 [2024-06-10 12:18:03.757314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.757354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.757637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.757679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.757854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.757894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.365 [2024-06-10 12:18:03.758213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.365 [2024-06-10 12:18:03.758254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.365 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.758530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.758572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 
00:29:14.366 [2024-06-10 12:18:03.758777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.758792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.759050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.759065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.759331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.759372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.759656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.759698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.759908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.759921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 
00:29:14.366 [2024-06-10 12:18:03.760099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.760113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.760227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.760240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.760444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.760503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.760671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.760711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.760864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.760905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 
00:29:14.366 [2024-06-10 12:18:03.761216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.761256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.761470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.761492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.761733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.761747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.761970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.762011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.762233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.762273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 
00:29:14.366 [2024-06-10 12:18:03.762515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.762528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.762703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.762744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.763068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.763108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.763384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.763425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.763628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.763669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 
00:29:14.366 [2024-06-10 12:18:03.763853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.763894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.764185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.764226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.764487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.764528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.764673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.764687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.764843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.764884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 
00:29:14.366 [2024-06-10 12:18:03.765162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.765203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.765494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.765536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.765722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.765763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.765970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.765983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.766154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.766167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 
00:29:14.366 [2024-06-10 12:18:03.766332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.766345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.766497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.766511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.766726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.766739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.766907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.766921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.767090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.767131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 
00:29:14.366 [2024-06-10 12:18:03.767346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.767387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.767574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.767615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.767774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.767787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.767958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.767998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.768220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.768260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 
00:29:14.366 [2024-06-10 12:18:03.768522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.768564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.768840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.768881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.769097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.769138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.769360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.769401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.769632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.769673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 
00:29:14.366 [2024-06-10 12:18:03.769883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.769896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.769992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.770004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.770172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.770187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.770361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.770401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 00:29:14.366 [2024-06-10 12:18:03.770608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.366 [2024-06-10 12:18:03.770650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.366 qpair failed and we were unable to recover it. 
00:29:14.367 [2024-06-10 12:18:03.770882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.770923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.771256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.771297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.771615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.771629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.771800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.771815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.771992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.772032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 
00:29:14.367 [2024-06-10 12:18:03.772259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.772300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.772631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.772645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.772758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.772772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.772969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.773010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.773266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.773306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 
00:29:14.367 [2024-06-10 12:18:03.773625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.773639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.773742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.773754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.773987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.774027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.774273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.774314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.774595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.774638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 
00:29:14.367 [2024-06-10 12:18:03.774801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.774842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.775023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.775064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.775360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.775401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.775597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.775638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.775841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.775855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 
00:29:14.367 [2024-06-10 12:18:03.776037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.776077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.776316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.776356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.776586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.776600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.776721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.776735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.776973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.776987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 
00:29:14.367 [2024-06-10 12:18:03.777174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.777187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.777371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.777412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.777678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.777720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.777952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.777993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.778235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.778276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 
00:29:14.367 [2024-06-10 12:18:03.778582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.778624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.778787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.778827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.779118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.779157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.779383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.779424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.779708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.779721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 
00:29:14.367 [2024-06-10 12:18:03.779836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.779849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.779969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.780003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.780310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.780356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.780617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.780631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.780745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.780757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 
00:29:14.367 [2024-06-10 12:18:03.780915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.780928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.781123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.781136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.781350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.781363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.781560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.781573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.781697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.781711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 
00:29:14.367 [2024-06-10 12:18:03.781843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.781884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.782061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.782102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.782329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.782370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.367 [2024-06-10 12:18:03.782578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.367 [2024-06-10 12:18:03.782592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.367 qpair failed and we were unable to recover it. 00:29:14.368 [2024-06-10 12:18:03.782814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.368 [2024-06-10 12:18:03.782855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.368 qpair failed and we were unable to recover it. 
00:29:14.368 [2024-06-10 12:18:03.783011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.368 [2024-06-10 12:18:03.783051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.368 qpair failed and we were unable to recover it. 00:29:14.368 [2024-06-10 12:18:03.783298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.368 [2024-06-10 12:18:03.783339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.368 qpair failed and we were unable to recover it. 00:29:14.368 [2024-06-10 12:18:03.783557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.368 [2024-06-10 12:18:03.783572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.368 qpair failed and we were unable to recover it. 00:29:14.368 [2024-06-10 12:18:03.783675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.368 [2024-06-10 12:18:03.783688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.368 qpair failed and we were unable to recover it. 00:29:14.368 [2024-06-10 12:18:03.783852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.368 [2024-06-10 12:18:03.783865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.368 qpair failed and we were unable to recover it. 
00:29:14.654 [2024-06-10 12:18:03.807985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.654 [2024-06-10 12:18:03.808025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.654 qpair failed and we were unable to recover it. 00:29:14.654 [2024-06-10 12:18:03.808275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.654 [2024-06-10 12:18:03.808316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.654 qpair failed and we were unable to recover it. 00:29:14.654 [2024-06-10 12:18:03.808627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.654 [2024-06-10 12:18:03.808669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.654 qpair failed and we were unable to recover it. 00:29:14.654 [2024-06-10 12:18:03.808828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.808841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 00:29:14.655 [2024-06-10 12:18:03.809012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.809036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 
00:29:14.655 [2024-06-10 12:18:03.809150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.809163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 00:29:14.655 [2024-06-10 12:18:03.809330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.809343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 00:29:14.655 [2024-06-10 12:18:03.809586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.809599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 00:29:14.655 [2024-06-10 12:18:03.809720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.809761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 00:29:14.655 [2024-06-10 12:18:03.810039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.810080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 
00:29:14.655 [2024-06-10 12:18:03.810401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.810442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 00:29:14.655 [2024-06-10 12:18:03.810687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.810728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 00:29:14.655 [2024-06-10 12:18:03.810946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.810960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 00:29:14.655 [2024-06-10 12:18:03.811138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.811152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 00:29:14.655 [2024-06-10 12:18:03.811350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.811390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 
00:29:14.655 [2024-06-10 12:18:03.811619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.811633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 00:29:14.655 [2024-06-10 12:18:03.811756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.811770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 00:29:14.655 [2024-06-10 12:18:03.811931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.811945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 00:29:14.655 [2024-06-10 12:18:03.812124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.812138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 00:29:14.655 [2024-06-10 12:18:03.812239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.812252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 
00:29:14.655 [2024-06-10 12:18:03.812424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.812438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 00:29:14.655 [2024-06-10 12:18:03.812616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.812631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 00:29:14.655 [2024-06-10 12:18:03.812773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.812814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 00:29:14.655 [2024-06-10 12:18:03.813047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.813088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 00:29:14.655 [2024-06-10 12:18:03.813373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.813414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 
00:29:14.655 [2024-06-10 12:18:03.813680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.813722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 00:29:14.655 [2024-06-10 12:18:03.814017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.814030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 00:29:14.655 [2024-06-10 12:18:03.814208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.814222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 00:29:14.655 [2024-06-10 12:18:03.814439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.814453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 00:29:14.655 [2024-06-10 12:18:03.814641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.814655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 
00:29:14.655 [2024-06-10 12:18:03.814926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.814967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 00:29:14.655 [2024-06-10 12:18:03.815252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.815293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 00:29:14.655 [2024-06-10 12:18:03.815620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.815667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 00:29:14.655 [2024-06-10 12:18:03.815848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.815888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 00:29:14.655 [2024-06-10 12:18:03.816029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.816042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 
00:29:14.655 [2024-06-10 12:18:03.816235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.816248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 00:29:14.655 [2024-06-10 12:18:03.816404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.816418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 00:29:14.655 [2024-06-10 12:18:03.816589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.816602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 00:29:14.655 [2024-06-10 12:18:03.816765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.816805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 00:29:14.655 [2024-06-10 12:18:03.817035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.817076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 
00:29:14.655 [2024-06-10 12:18:03.817354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.655 [2024-06-10 12:18:03.817394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.655 qpair failed and we were unable to recover it. 00:29:14.655 [2024-06-10 12:18:03.817701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.817743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 00:29:14.656 [2024-06-10 12:18:03.818051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.818092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 00:29:14.656 [2024-06-10 12:18:03.818320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.818362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 00:29:14.656 [2024-06-10 12:18:03.818588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.818601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 
00:29:14.656 [2024-06-10 12:18:03.818761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.818774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 00:29:14.656 [2024-06-10 12:18:03.818862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.818875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 00:29:14.656 [2024-06-10 12:18:03.818987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.819000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 00:29:14.656 [2024-06-10 12:18:03.819235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.819249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 00:29:14.656 [2024-06-10 12:18:03.819401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.819414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 
00:29:14.656 [2024-06-10 12:18:03.819655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.819669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 00:29:14.656 [2024-06-10 12:18:03.819795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.819836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 00:29:14.656 [2024-06-10 12:18:03.820156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.820197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 00:29:14.656 [2024-06-10 12:18:03.820524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.820567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 00:29:14.656 [2024-06-10 12:18:03.820849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.820890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 
00:29:14.656 [2024-06-10 12:18:03.821212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.821253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 00:29:14.656 [2024-06-10 12:18:03.821497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.821539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 00:29:14.656 [2024-06-10 12:18:03.821773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.821814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 00:29:14.656 [2024-06-10 12:18:03.822037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.822077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 00:29:14.656 [2024-06-10 12:18:03.822368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.822409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 
00:29:14.656 [2024-06-10 12:18:03.822668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.822710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 00:29:14.656 [2024-06-10 12:18:03.822991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.823033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 00:29:14.656 [2024-06-10 12:18:03.823335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.823377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 00:29:14.656 [2024-06-10 12:18:03.823552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.823567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 00:29:14.656 [2024-06-10 12:18:03.823711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.823752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 
00:29:14.656 [2024-06-10 12:18:03.824001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.824042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 00:29:14.656 [2024-06-10 12:18:03.824338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.824380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 00:29:14.656 [2024-06-10 12:18:03.824669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.824711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 00:29:14.656 [2024-06-10 12:18:03.824869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.824910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 00:29:14.656 [2024-06-10 12:18:03.825234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.825275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 
00:29:14.656 [2024-06-10 12:18:03.825584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.825626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 00:29:14.656 [2024-06-10 12:18:03.825820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.825832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 00:29:14.656 [2024-06-10 12:18:03.825999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.826015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 00:29:14.656 [2024-06-10 12:18:03.826275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.826317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 00:29:14.656 [2024-06-10 12:18:03.826586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.826627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 
00:29:14.656 [2024-06-10 12:18:03.826854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.826868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 00:29:14.656 [2024-06-10 12:18:03.827086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.827099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 00:29:14.656 [2024-06-10 12:18:03.827229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.827270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.656 qpair failed and we were unable to recover it. 00:29:14.656 [2024-06-10 12:18:03.827582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.656 [2024-06-10 12:18:03.827624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.657 qpair failed and we were unable to recover it. 00:29:14.657 [2024-06-10 12:18:03.827825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.657 [2024-06-10 12:18:03.827867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.657 qpair failed and we were unable to recover it. 
00:29:14.657 [2024-06-10 12:18:03.828131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.657 [2024-06-10 12:18:03.828173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.657 qpair failed and we were unable to recover it. 00:29:14.657 [2024-06-10 12:18:03.828352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.657 [2024-06-10 12:18:03.828393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.657 qpair failed and we were unable to recover it. 00:29:14.657 [2024-06-10 12:18:03.828710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.657 [2024-06-10 12:18:03.828725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.657 qpair failed and we were unable to recover it. 00:29:14.657 [2024-06-10 12:18:03.828838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.657 [2024-06-10 12:18:03.828851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.657 qpair failed and we were unable to recover it. 00:29:14.657 [2024-06-10 12:18:03.828950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.657 [2024-06-10 12:18:03.828963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.657 qpair failed and we were unable to recover it. 
00:29:14.657 [2024-06-10 12:18:03.829069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.657 [2024-06-10 12:18:03.829081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.657 qpair failed and we were unable to recover it. 00:29:14.657 [2024-06-10 12:18:03.829345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.657 [2024-06-10 12:18:03.829386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.657 qpair failed and we were unable to recover it. 00:29:14.657 [2024-06-10 12:18:03.829716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.657 [2024-06-10 12:18:03.829759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.657 qpair failed and we were unable to recover it. 00:29:14.657 [2024-06-10 12:18:03.829979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.657 [2024-06-10 12:18:03.829993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.657 qpair failed and we were unable to recover it. 00:29:14.657 [2024-06-10 12:18:03.830197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.657 [2024-06-10 12:18:03.830210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.657 qpair failed and we were unable to recover it. 
00:29:14.657 [2024-06-10 12:18:03.830395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.657 [2024-06-10 12:18:03.830436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.657 qpair failed and we were unable to recover it. 00:29:14.657 [2024-06-10 12:18:03.830703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.657 [2024-06-10 12:18:03.830744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.657 qpair failed and we were unable to recover it. 00:29:14.657 [2024-06-10 12:18:03.830905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.657 [2024-06-10 12:18:03.830919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.657 qpair failed and we were unable to recover it. 00:29:14.657 [2024-06-10 12:18:03.831090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.657 [2024-06-10 12:18:03.831104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.657 qpair failed and we were unable to recover it. 00:29:14.657 [2024-06-10 12:18:03.831401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.657 [2024-06-10 12:18:03.831443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.657 qpair failed and we were unable to recover it. 
00:29:14.657 [2024-06-10 12:18:03.831748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.657 [2024-06-10 12:18:03.831789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.657 qpair failed and we were unable to recover it. 00:29:14.657 [2024-06-10 12:18:03.831965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.657 [2024-06-10 12:18:03.832006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.657 qpair failed and we were unable to recover it. 00:29:14.657 [2024-06-10 12:18:03.832249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.657 [2024-06-10 12:18:03.832290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.657 qpair failed and we were unable to recover it. 00:29:14.657 [2024-06-10 12:18:03.832462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.657 [2024-06-10 12:18:03.832517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.657 qpair failed and we were unable to recover it. 00:29:14.657 [2024-06-10 12:18:03.832808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.657 [2024-06-10 12:18:03.832855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.657 qpair failed and we were unable to recover it. 
00:29:14.657 [2024-06-10 12:18:03.833021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.657 [2024-06-10 12:18:03.833035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.657 qpair failed and we were unable to recover it. 00:29:14.657 [2024-06-10 12:18:03.833227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.657 [2024-06-10 12:18:03.833240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.657 qpair failed and we were unable to recover it. 00:29:14.657 [2024-06-10 12:18:03.833493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.657 [2024-06-10 12:18:03.833507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.657 qpair failed and we were unable to recover it. 00:29:14.657 [2024-06-10 12:18:03.833620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.657 [2024-06-10 12:18:03.833633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.657 qpair failed and we were unable to recover it. 00:29:14.657 [2024-06-10 12:18:03.833741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.657 [2024-06-10 12:18:03.833753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.657 qpair failed and we were unable to recover it. 
00:29:14.657 [2024-06-10 12:18:03.833859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.657 [2024-06-10 12:18:03.833872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.657 qpair failed and we were unable to recover it. 00:29:14.657 [2024-06-10 12:18:03.834102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.657 [2024-06-10 12:18:03.834143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.657 qpair failed and we were unable to recover it. 00:29:14.657 [2024-06-10 12:18:03.834300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.657 [2024-06-10 12:18:03.834341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.657 qpair failed and we were unable to recover it. 00:29:14.657 [2024-06-10 12:18:03.834517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.657 [2024-06-10 12:18:03.834558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.657 qpair failed and we were unable to recover it. 00:29:14.657 [2024-06-10 12:18:03.834771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.834784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 
00:29:14.658 [2024-06-10 12:18:03.834988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.835029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 00:29:14.658 [2024-06-10 12:18:03.835258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.835300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 00:29:14.658 [2024-06-10 12:18:03.835620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.835667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 00:29:14.658 [2024-06-10 12:18:03.835895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.835937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 00:29:14.658 [2024-06-10 12:18:03.836138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.836179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 
00:29:14.658 [2024-06-10 12:18:03.836491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.836522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 00:29:14.658 [2024-06-10 12:18:03.836639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.836652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 00:29:14.658 [2024-06-10 12:18:03.836768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.836802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 00:29:14.658 [2024-06-10 12:18:03.836972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.837013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 00:29:14.658 [2024-06-10 12:18:03.837263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.837304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 
00:29:14.658 [2024-06-10 12:18:03.837515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.837556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 00:29:14.658 [2024-06-10 12:18:03.837809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.837850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 00:29:14.658 [2024-06-10 12:18:03.838089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.838103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 00:29:14.658 [2024-06-10 12:18:03.838347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.838361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 00:29:14.658 [2024-06-10 12:18:03.838650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.838693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 
00:29:14.658 [2024-06-10 12:18:03.839016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.839057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 00:29:14.658 [2024-06-10 12:18:03.839366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.839407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 00:29:14.658 [2024-06-10 12:18:03.839664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.839706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 00:29:14.658 [2024-06-10 12:18:03.839886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.839899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 00:29:14.658 [2024-06-10 12:18:03.840016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.840040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 
00:29:14.658 [2024-06-10 12:18:03.840297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.840311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 00:29:14.658 [2024-06-10 12:18:03.840499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.840541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 00:29:14.658 [2024-06-10 12:18:03.840776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.840817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 00:29:14.658 [2024-06-10 12:18:03.840994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.841008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 00:29:14.658 [2024-06-10 12:18:03.841279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.841320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 
00:29:14.658 [2024-06-10 12:18:03.841534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.841576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 00:29:14.658 [2024-06-10 12:18:03.841799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.841839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 00:29:14.658 [2024-06-10 12:18:03.842153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.842201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 00:29:14.658 [2024-06-10 12:18:03.842531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.842574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 00:29:14.658 [2024-06-10 12:18:03.842767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.842781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 
00:29:14.658 [2024-06-10 12:18:03.842949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.842963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 00:29:14.658 [2024-06-10 12:18:03.843238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.843280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 00:29:14.658 [2024-06-10 12:18:03.843509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.843550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 00:29:14.658 [2024-06-10 12:18:03.843776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.843817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 00:29:14.658 [2024-06-10 12:18:03.844040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.844054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 
00:29:14.658 [2024-06-10 12:18:03.844305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.844319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 00:29:14.658 [2024-06-10 12:18:03.844491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.658 [2024-06-10 12:18:03.844533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.658 qpair failed and we were unable to recover it. 00:29:14.658 [2024-06-10 12:18:03.844833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.659 [2024-06-10 12:18:03.844875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.659 qpair failed and we were unable to recover it. 00:29:14.659 [2024-06-10 12:18:03.845156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.659 [2024-06-10 12:18:03.845197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.659 qpair failed and we were unable to recover it. 00:29:14.659 [2024-06-10 12:18:03.845517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.659 [2024-06-10 12:18:03.845560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.659 qpair failed and we were unable to recover it. 
00:29:14.659 [2024-06-10 12:18:03.845731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.659 [2024-06-10 12:18:03.845772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.659 qpair failed and we were unable to recover it. 00:29:14.659 [2024-06-10 12:18:03.845937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.659 [2024-06-10 12:18:03.845950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.659 qpair failed and we were unable to recover it. 00:29:14.659 [2024-06-10 12:18:03.846214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.659 [2024-06-10 12:18:03.846261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.659 qpair failed and we were unable to recover it. 00:29:14.659 [2024-06-10 12:18:03.846498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.659 [2024-06-10 12:18:03.846540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.659 qpair failed and we were unable to recover it. 00:29:14.659 [2024-06-10 12:18:03.846789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.659 [2024-06-10 12:18:03.846803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.659 qpair failed and we were unable to recover it. 
00:29:14.659 [2024-06-10 12:18:03.846980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.659 [2024-06-10 12:18:03.847021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.659 qpair failed and we were unable to recover it. 00:29:14.659 [2024-06-10 12:18:03.847311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.659 [2024-06-10 12:18:03.847352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.659 qpair failed and we were unable to recover it. 00:29:14.659 [2024-06-10 12:18:03.847586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.659 [2024-06-10 12:18:03.847600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.659 qpair failed and we were unable to recover it. 00:29:14.659 [2024-06-10 12:18:03.847759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.659 [2024-06-10 12:18:03.847772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.659 qpair failed and we were unable to recover it. 00:29:14.659 [2024-06-10 12:18:03.847874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.659 [2024-06-10 12:18:03.847886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.659 qpair failed and we were unable to recover it. 
00:29:14.659 [2024-06-10 12:18:03.848063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.659 [2024-06-10 12:18:03.848077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.659 qpair failed and we were unable to recover it. 00:29:14.659 [2024-06-10 12:18:03.848355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.659 [2024-06-10 12:18:03.848396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.659 qpair failed and we were unable to recover it. 00:29:14.659 [2024-06-10 12:18:03.848604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.659 [2024-06-10 12:18:03.848618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.659 qpair failed and we were unable to recover it. 00:29:14.659 [2024-06-10 12:18:03.848770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.659 [2024-06-10 12:18:03.848809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.659 qpair failed and we were unable to recover it. 00:29:14.659 [2024-06-10 12:18:03.849041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.659 [2024-06-10 12:18:03.849082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.659 qpair failed and we were unable to recover it. 
00:29:14.659 [2024-06-10 12:18:03.849367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.659 [2024-06-10 12:18:03.849408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.659 qpair failed and we were unable to recover it. 00:29:14.659 [2024-06-10 12:18:03.849649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.659 [2024-06-10 12:18:03.849690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.659 qpair failed and we were unable to recover it. 00:29:14.659 [2024-06-10 12:18:03.849996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.659 [2024-06-10 12:18:03.850038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.659 qpair failed and we were unable to recover it. 00:29:14.659 [2024-06-10 12:18:03.850346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.659 [2024-06-10 12:18:03.850387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.659 qpair failed and we were unable to recover it. 00:29:14.659 [2024-06-10 12:18:03.850635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.659 [2024-06-10 12:18:03.850650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.659 qpair failed and we were unable to recover it. 
00:29:14.659 [2024-06-10 12:18:03.850883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.659 [2024-06-10 12:18:03.850924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.659 qpair failed and we were unable to recover it. 00:29:14.659 [2024-06-10 12:18:03.851223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.659 [2024-06-10 12:18:03.851264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.659 qpair failed and we were unable to recover it. 00:29:14.659 [2024-06-10 12:18:03.851545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.659 [2024-06-10 12:18:03.851587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.659 qpair failed and we were unable to recover it. 00:29:14.659 [2024-06-10 12:18:03.851858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.659 [2024-06-10 12:18:03.851872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.659 qpair failed and we were unable to recover it. 00:29:14.659 [2024-06-10 12:18:03.851985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.659 [2024-06-10 12:18:03.851998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.659 qpair failed and we were unable to recover it. 
00:29:14.661 [2024-06-10 12:18:03.862946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.661 [2024-06-10 12:18:03.862987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.661 qpair failed and we were unable to recover it.
00:29:14.661 [2024-06-10 12:18:03.863168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.661 [2024-06-10 12:18:03.863209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.661 qpair failed and we were unable to recover it.
00:29:14.661 [2024-06-10 12:18:03.863519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.661 [2024-06-10 12:18:03.863561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.661 qpair failed and we were unable to recover it.
00:29:14.661 [2024-06-10 12:18:03.863860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.661 [2024-06-10 12:18:03.863894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.661 qpair failed and we were unable to recover it.
00:29:14.661 [2024-06-10 12:18:03.864330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.661 [2024-06-10 12:18:03.864412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:14.661 qpair failed and we were unable to recover it.
00:29:14.663 [2024-06-10 12:18:03.882178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.663 [2024-06-10 12:18:03.882220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.663 qpair failed and we were unable to recover it. 00:29:14.663 [2024-06-10 12:18:03.882449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.663 [2024-06-10 12:18:03.882502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.663 qpair failed and we were unable to recover it. 00:29:14.663 [2024-06-10 12:18:03.882792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.663 [2024-06-10 12:18:03.882811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.663 qpair failed and we were unable to recover it. 00:29:14.663 [2024-06-10 12:18:03.882994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.663 [2024-06-10 12:18:03.883013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.663 qpair failed and we were unable to recover it. 00:29:14.663 [2024-06-10 12:18:03.883235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.663 [2024-06-10 12:18:03.883254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.663 qpair failed and we were unable to recover it. 
00:29:14.663 [2024-06-10 12:18:03.883435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.663 [2024-06-10 12:18:03.883492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.663 qpair failed and we were unable to recover it. 00:29:14.663 [2024-06-10 12:18:03.883667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.663 [2024-06-10 12:18:03.883686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.663 qpair failed and we were unable to recover it. 00:29:14.663 [2024-06-10 12:18:03.883877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.663 [2024-06-10 12:18:03.883895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.663 qpair failed and we were unable to recover it. 00:29:14.663 [2024-06-10 12:18:03.884114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.663 [2024-06-10 12:18:03.884134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.663 qpair failed and we were unable to recover it. 00:29:14.663 [2024-06-10 12:18:03.884424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.663 [2024-06-10 12:18:03.884464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.663 qpair failed and we were unable to recover it. 
00:29:14.663 [2024-06-10 12:18:03.884716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.663 [2024-06-10 12:18:03.884758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.663 qpair failed and we were unable to recover it. 00:29:14.663 [2024-06-10 12:18:03.885043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.663 [2024-06-10 12:18:03.885084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.663 qpair failed and we were unable to recover it. 00:29:14.663 [2024-06-10 12:18:03.885385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.663 [2024-06-10 12:18:03.885426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.663 qpair failed and we were unable to recover it. 00:29:14.663 [2024-06-10 12:18:03.885661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.663 [2024-06-10 12:18:03.885703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.663 qpair failed and we were unable to recover it. 00:29:14.663 [2024-06-10 12:18:03.885942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.663 [2024-06-10 12:18:03.885982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.663 qpair failed and we were unable to recover it. 
00:29:14.663 [2024-06-10 12:18:03.886213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.663 [2024-06-10 12:18:03.886254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.663 qpair failed and we were unable to recover it. 00:29:14.663 [2024-06-10 12:18:03.886475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.663 [2024-06-10 12:18:03.886532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.663 qpair failed and we were unable to recover it. 00:29:14.663 [2024-06-10 12:18:03.886821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.663 [2024-06-10 12:18:03.886857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.663 qpair failed and we were unable to recover it. 00:29:14.663 [2024-06-10 12:18:03.887149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.663 [2024-06-10 12:18:03.887190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.663 qpair failed and we were unable to recover it. 00:29:14.663 [2024-06-10 12:18:03.887508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.663 [2024-06-10 12:18:03.887550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.663 qpair failed and we were unable to recover it. 
00:29:14.663 [2024-06-10 12:18:03.887798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.663 [2024-06-10 12:18:03.887838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.663 qpair failed and we were unable to recover it. 00:29:14.663 [2024-06-10 12:18:03.888076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.663 [2024-06-10 12:18:03.888116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.663 qpair failed and we were unable to recover it. 00:29:14.663 [2024-06-10 12:18:03.888285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.663 [2024-06-10 12:18:03.888343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.663 qpair failed and we were unable to recover it. 00:29:14.664 [2024-06-10 12:18:03.888596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.664 [2024-06-10 12:18:03.888638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.664 qpair failed and we were unable to recover it. 00:29:14.664 [2024-06-10 12:18:03.888856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.664 [2024-06-10 12:18:03.888874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.664 qpair failed and we were unable to recover it. 
00:29:14.664 [2024-06-10 12:18:03.889062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.664 [2024-06-10 12:18:03.889103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.664 qpair failed and we were unable to recover it. 00:29:14.664 [2024-06-10 12:18:03.889412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.664 [2024-06-10 12:18:03.889453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.664 qpair failed and we were unable to recover it. 00:29:14.664 [2024-06-10 12:18:03.889833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.664 [2024-06-10 12:18:03.889872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.664 qpair failed and we were unable to recover it. 00:29:14.664 [2024-06-10 12:18:03.890125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.664 [2024-06-10 12:18:03.890144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.664 qpair failed and we were unable to recover it. 00:29:14.664 [2024-06-10 12:18:03.890377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.664 [2024-06-10 12:18:03.890398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.664 qpair failed and we were unable to recover it. 
00:29:14.664 [2024-06-10 12:18:03.890683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.664 [2024-06-10 12:18:03.890702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.664 qpair failed and we were unable to recover it. 00:29:14.664 [2024-06-10 12:18:03.890816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.664 [2024-06-10 12:18:03.890835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.664 qpair failed and we were unable to recover it. 00:29:14.664 [2024-06-10 12:18:03.891061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.664 [2024-06-10 12:18:03.891101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.664 qpair failed and we were unable to recover it. 00:29:14.664 [2024-06-10 12:18:03.891411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.664 [2024-06-10 12:18:03.891452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.664 qpair failed and we were unable to recover it. 00:29:14.664 [2024-06-10 12:18:03.891706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.664 [2024-06-10 12:18:03.891748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.664 qpair failed and we were unable to recover it. 
00:29:14.664 [2024-06-10 12:18:03.892045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.664 [2024-06-10 12:18:03.892080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.664 qpair failed and we were unable to recover it. 00:29:14.664 [2024-06-10 12:18:03.892334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.664 [2024-06-10 12:18:03.892375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.664 qpair failed and we were unable to recover it. 00:29:14.664 [2024-06-10 12:18:03.892642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.664 [2024-06-10 12:18:03.892684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.664 qpair failed and we were unable to recover it. 00:29:14.664 [2024-06-10 12:18:03.892835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.664 [2024-06-10 12:18:03.892854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.664 qpair failed and we were unable to recover it. 00:29:14.664 [2024-06-10 12:18:03.893029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.664 [2024-06-10 12:18:03.893047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.664 qpair failed and we were unable to recover it. 
00:29:14.664 [2024-06-10 12:18:03.893259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.664 [2024-06-10 12:18:03.893299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.664 qpair failed and we were unable to recover it. 00:29:14.664 [2024-06-10 12:18:03.893590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.664 [2024-06-10 12:18:03.893631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.664 qpair failed and we were unable to recover it. 00:29:14.664 [2024-06-10 12:18:03.893902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.664 [2024-06-10 12:18:03.893921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.664 qpair failed and we were unable to recover it. 00:29:14.664 [2024-06-10 12:18:03.894159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.664 [2024-06-10 12:18:03.894178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.664 qpair failed and we were unable to recover it. 00:29:14.664 [2024-06-10 12:18:03.894464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.664 [2024-06-10 12:18:03.894519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.664 qpair failed and we were unable to recover it. 
00:29:14.664 [2024-06-10 12:18:03.894736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.664 [2024-06-10 12:18:03.894777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.664 qpair failed and we were unable to recover it. 00:29:14.664 [2024-06-10 12:18:03.895063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.664 [2024-06-10 12:18:03.895082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.664 qpair failed and we were unable to recover it. 00:29:14.664 [2024-06-10 12:18:03.895259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.664 [2024-06-10 12:18:03.895277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.664 qpair failed and we were unable to recover it. 00:29:14.664 [2024-06-10 12:18:03.895494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.664 [2024-06-10 12:18:03.895537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.664 qpair failed and we were unable to recover it. 00:29:14.664 [2024-06-10 12:18:03.895776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.664 [2024-06-10 12:18:03.895816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.664 qpair failed and we were unable to recover it. 
00:29:14.664 [2024-06-10 12:18:03.896064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.664 [2024-06-10 12:18:03.896104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.664 qpair failed and we were unable to recover it. 00:29:14.664 [2024-06-10 12:18:03.896387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.664 [2024-06-10 12:18:03.896427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.664 qpair failed and we were unable to recover it. 00:29:14.664 [2024-06-10 12:18:03.896708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.664 [2024-06-10 12:18:03.896751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.664 qpair failed and we were unable to recover it. 00:29:14.664 [2024-06-10 12:18:03.896989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.664 [2024-06-10 12:18:03.897030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.664 qpair failed and we were unable to recover it. 00:29:14.664 [2024-06-10 12:18:03.897280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.664 [2024-06-10 12:18:03.897299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.664 qpair failed and we were unable to recover it. 
00:29:14.664 [2024-06-10 12:18:03.897560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.664 [2024-06-10 12:18:03.897579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.664 qpair failed and we were unable to recover it. 00:29:14.664 [2024-06-10 12:18:03.897767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.665 [2024-06-10 12:18:03.897786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.665 qpair failed and we were unable to recover it. 00:29:14.665 [2024-06-10 12:18:03.897999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.665 [2024-06-10 12:18:03.898018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.665 qpair failed and we were unable to recover it. 00:29:14.665 [2024-06-10 12:18:03.898318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.665 [2024-06-10 12:18:03.898358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.665 qpair failed and we were unable to recover it. 00:29:14.665 [2024-06-10 12:18:03.898622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.665 [2024-06-10 12:18:03.898664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.665 qpair failed and we were unable to recover it. 
00:29:14.665 [2024-06-10 12:18:03.898830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.665 [2024-06-10 12:18:03.898848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.665 qpair failed and we were unable to recover it. 00:29:14.665 [2024-06-10 12:18:03.899032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.665 [2024-06-10 12:18:03.899050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.665 qpair failed and we were unable to recover it. 00:29:14.665 [2024-06-10 12:18:03.899184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.665 [2024-06-10 12:18:03.899202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.665 qpair failed and we were unable to recover it. 00:29:14.665 [2024-06-10 12:18:03.899376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.665 [2024-06-10 12:18:03.899394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.665 qpair failed and we were unable to recover it. 00:29:14.665 [2024-06-10 12:18:03.899573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.665 [2024-06-10 12:18:03.899592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.665 qpair failed and we were unable to recover it. 
00:29:14.665 [2024-06-10 12:18:03.899867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.665 [2024-06-10 12:18:03.899886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.665 qpair failed and we were unable to recover it. 00:29:14.665 [2024-06-10 12:18:03.900020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.665 [2024-06-10 12:18:03.900038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.665 qpair failed and we were unable to recover it. 00:29:14.665 [2024-06-10 12:18:03.900296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.665 [2024-06-10 12:18:03.900315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.665 qpair failed and we were unable to recover it. 00:29:14.665 [2024-06-10 12:18:03.900555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.665 [2024-06-10 12:18:03.900574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.665 qpair failed and we were unable to recover it. 00:29:14.665 [2024-06-10 12:18:03.900705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.665 [2024-06-10 12:18:03.900723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.665 qpair failed and we were unable to recover it. 
00:29:14.665 [2024-06-10 12:18:03.900863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.665 [2024-06-10 12:18:03.900885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.665 qpair failed and we were unable to recover it. 00:29:14.665 [2024-06-10 12:18:03.901066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.665 [2024-06-10 12:18:03.901084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.665 qpair failed and we were unable to recover it. 00:29:14.665 [2024-06-10 12:18:03.901349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.665 [2024-06-10 12:18:03.901368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.665 qpair failed and we were unable to recover it. 00:29:14.665 [2024-06-10 12:18:03.901574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.665 [2024-06-10 12:18:03.901595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.665 qpair failed and we were unable to recover it. 00:29:14.665 [2024-06-10 12:18:03.901791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.665 [2024-06-10 12:18:03.901817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:14.665 qpair failed and we were unable to recover it. 
00:29:14.665 [2024-06-10 12:18:03.901988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.665 [2024-06-10 12:18:03.902006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:14.665 qpair failed and we were unable to recover it.
[... the same triplet — posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error; "qpair failed and we were unable to recover it." — repeats continuously from 12:18:03.901988 through 12:18:03.926363, first for tqpair=0x88cfc0 and, from 12:18:03.919753 onward, for tqpair=0x7f5044000b90, all against addr=10.0.0.2, port=4420 ...]
00:29:14.668 [2024-06-10 12:18:03.926349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.668 [2024-06-10 12:18:03.926363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.668 qpair failed and we were unable to recover it.
00:29:14.668 [2024-06-10 12:18:03.926612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.668 [2024-06-10 12:18:03.926626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.668 qpair failed and we were unable to recover it. 00:29:14.669 [2024-06-10 12:18:03.926715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.926727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 00:29:14.669 [2024-06-10 12:18:03.926848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.926861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 00:29:14.669 [2024-06-10 12:18:03.926999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.927012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 00:29:14.669 [2024-06-10 12:18:03.927141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.927156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 
00:29:14.669 [2024-06-10 12:18:03.927275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.927289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 00:29:14.669 [2024-06-10 12:18:03.927559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.927573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 00:29:14.669 [2024-06-10 12:18:03.927809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.927823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 00:29:14.669 [2024-06-10 12:18:03.927934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.927948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 00:29:14.669 [2024-06-10 12:18:03.928138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.928152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 
00:29:14.669 [2024-06-10 12:18:03.928400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.928414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 00:29:14.669 [2024-06-10 12:18:03.928609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.928624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 00:29:14.669 [2024-06-10 12:18:03.928804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.928818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 00:29:14.669 [2024-06-10 12:18:03.928997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.929011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 00:29:14.669 [2024-06-10 12:18:03.929138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.929152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 
00:29:14.669 [2024-06-10 12:18:03.929409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.929425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 00:29:14.669 [2024-06-10 12:18:03.929610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.929624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 00:29:14.669 [2024-06-10 12:18:03.929783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.929798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 00:29:14.669 [2024-06-10 12:18:03.929908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.929921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 00:29:14.669 [2024-06-10 12:18:03.930125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.930140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 
00:29:14.669 [2024-06-10 12:18:03.930316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.930330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 00:29:14.669 [2024-06-10 12:18:03.930499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.930514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 00:29:14.669 [2024-06-10 12:18:03.930759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.930773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 00:29:14.669 [2024-06-10 12:18:03.930923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.930937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 00:29:14.669 [2024-06-10 12:18:03.931135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.931149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 
00:29:14.669 [2024-06-10 12:18:03.931401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.931415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 00:29:14.669 [2024-06-10 12:18:03.931538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.931550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 00:29:14.669 [2024-06-10 12:18:03.931769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.931783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 00:29:14.669 [2024-06-10 12:18:03.931964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.931979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 00:29:14.669 [2024-06-10 12:18:03.932166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.932180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 
00:29:14.669 [2024-06-10 12:18:03.932335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.932349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 00:29:14.669 [2024-06-10 12:18:03.932458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.932472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 00:29:14.669 [2024-06-10 12:18:03.932634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.932649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 00:29:14.669 [2024-06-10 12:18:03.932868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.932882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 00:29:14.669 [2024-06-10 12:18:03.933034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.933048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 
00:29:14.669 [2024-06-10 12:18:03.933336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.933350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 00:29:14.669 [2024-06-10 12:18:03.933634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.933648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 00:29:14.669 [2024-06-10 12:18:03.933803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.933818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 00:29:14.669 [2024-06-10 12:18:03.934011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.669 [2024-06-10 12:18:03.934024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.669 qpair failed and we were unable to recover it. 00:29:14.669 [2024-06-10 12:18:03.934278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.934292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 
00:29:14.670 [2024-06-10 12:18:03.934489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.934503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-06-10 12:18:03.934718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.934732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-06-10 12:18:03.934919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.934933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-06-10 12:18:03.935106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.935120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-06-10 12:18:03.935299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.935313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 
00:29:14.670 [2024-06-10 12:18:03.935544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.935558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-06-10 12:18:03.935741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.935755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-06-10 12:18:03.935940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.935954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-06-10 12:18:03.936127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.936140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-06-10 12:18:03.936326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.936340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 
00:29:14.670 [2024-06-10 12:18:03.936496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.936510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-06-10 12:18:03.936683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.936697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-06-10 12:18:03.936873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.936887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-06-10 12:18:03.936975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.936988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-06-10 12:18:03.937111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.937126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 
00:29:14.670 [2024-06-10 12:18:03.937295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.937311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-06-10 12:18:03.937547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.937561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-06-10 12:18:03.937682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.937696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-06-10 12:18:03.937939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.937952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-06-10 12:18:03.938077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.938091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 
00:29:14.670 [2024-06-10 12:18:03.938265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.938278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-06-10 12:18:03.938431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.938445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-06-10 12:18:03.938615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.938629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-06-10 12:18:03.938839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.938853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-06-10 12:18:03.939005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.939019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 
00:29:14.670 [2024-06-10 12:18:03.939203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.939217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-06-10 12:18:03.939369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.939383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-06-10 12:18:03.939609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.939623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-06-10 12:18:03.939803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.939817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-06-10 12:18:03.940063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.940076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 
00:29:14.670 [2024-06-10 12:18:03.940326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.940340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-06-10 12:18:03.940621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.940636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-06-10 12:18:03.940805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.940819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-06-10 12:18:03.940990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.941004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-06-10 12:18:03.941277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.941291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 
00:29:14.670 [2024-06-10 12:18:03.941458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.941472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-06-10 12:18:03.941677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.941692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.670 [2024-06-10 12:18:03.941867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.670 [2024-06-10 12:18:03.941881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.670 qpair failed and we were unable to recover it. 00:29:14.671 [2024-06-10 12:18:03.942053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.671 [2024-06-10 12:18:03.942066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.671 qpair failed and we were unable to recover it. 00:29:14.671 [2024-06-10 12:18:03.942239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.671 [2024-06-10 12:18:03.942253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.671 qpair failed and we were unable to recover it. 
00:29:14.671 [2024-06-10 12:18:03.942350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.671 [2024-06-10 12:18:03.942363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.671 qpair failed and we were unable to recover it. 00:29:14.671 [2024-06-10 12:18:03.942568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.671 [2024-06-10 12:18:03.942582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.671 qpair failed and we were unable to recover it. 00:29:14.671 [2024-06-10 12:18:03.942744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.671 [2024-06-10 12:18:03.942757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.671 qpair failed and we were unable to recover it. 00:29:14.671 [2024-06-10 12:18:03.942929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.671 [2024-06-10 12:18:03.942943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.671 qpair failed and we were unable to recover it. 00:29:14.671 [2024-06-10 12:18:03.943103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.671 [2024-06-10 12:18:03.943117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.671 qpair failed and we were unable to recover it. 
00:29:14.671 [2024-06-10 12:18:03.943385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.671 [2024-06-10 12:18:03.943399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-06-10 12:18:03.943577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.671 [2024-06-10 12:18:03.943591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-06-10 12:18:03.943831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.671 [2024-06-10 12:18:03.943844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-06-10 12:18:03.943966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.671 [2024-06-10 12:18:03.943980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-06-10 12:18:03.944171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.671 [2024-06-10 12:18:03.944185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-06-10 12:18:03.944354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.671 [2024-06-10 12:18:03.944368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-06-10 12:18:03.944531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.671 [2024-06-10 12:18:03.944545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-06-10 12:18:03.944640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.671 [2024-06-10 12:18:03.944653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-06-10 12:18:03.944889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.671 [2024-06-10 12:18:03.944903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-06-10 12:18:03.945020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.671 [2024-06-10 12:18:03.945034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-06-10 12:18:03.945139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.671 [2024-06-10 12:18:03.945154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-06-10 12:18:03.945257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.671 [2024-06-10 12:18:03.945270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-06-10 12:18:03.945521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.671 [2024-06-10 12:18:03.945534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-06-10 12:18:03.945722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.671 [2024-06-10 12:18:03.945736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-06-10 12:18:03.945911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.671 [2024-06-10 12:18:03.945925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-06-10 12:18:03.946094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.671 [2024-06-10 12:18:03.946108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-06-10 12:18:03.946324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.671 [2024-06-10 12:18:03.946338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-06-10 12:18:03.946613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.671 [2024-06-10 12:18:03.946627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-06-10 12:18:03.946802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.671 [2024-06-10 12:18:03.946816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-06-10 12:18:03.946916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.671 [2024-06-10 12:18:03.946930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-06-10 12:18:03.947102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.671 [2024-06-10 12:18:03.947115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-06-10 12:18:03.947277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.671 [2024-06-10 12:18:03.947291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-06-10 12:18:03.947462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.671 [2024-06-10 12:18:03.947486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-06-10 12:18:03.947607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.671 [2024-06-10 12:18:03.947621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-06-10 12:18:03.947740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.671 [2024-06-10 12:18:03.947753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-06-10 12:18:03.947904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.671 [2024-06-10 12:18:03.947918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-06-10 12:18:03.948089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.671 [2024-06-10 12:18:03.948103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-06-10 12:18:03.948199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.671 [2024-06-10 12:18:03.948212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-06-10 12:18:03.948404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.671 [2024-06-10 12:18:03.948417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-06-10 12:18:03.948635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.671 [2024-06-10 12:18:03.948650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-06-10 12:18:03.948773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.671 [2024-06-10 12:18:03.948786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.671 qpair failed and we were unable to recover it.
00:29:14.671 [2024-06-10 12:18:03.948896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.671 [2024-06-10 12:18:03.948910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-06-10 12:18:03.949154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.672 [2024-06-10 12:18:03.949168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-06-10 12:18:03.949276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.672 [2024-06-10 12:18:03.949288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-06-10 12:18:03.949441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.672 [2024-06-10 12:18:03.949456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-06-10 12:18:03.949659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.672 [2024-06-10 12:18:03.949673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-06-10 12:18:03.949769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.672 [2024-06-10 12:18:03.949782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-06-10 12:18:03.949972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.672 [2024-06-10 12:18:03.949986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-06-10 12:18:03.950236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.672 [2024-06-10 12:18:03.950250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-06-10 12:18:03.950421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.672 [2024-06-10 12:18:03.950435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-06-10 12:18:03.950609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.672 [2024-06-10 12:18:03.950623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-06-10 12:18:03.950745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.672 [2024-06-10 12:18:03.950760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-06-10 12:18:03.951010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.672 [2024-06-10 12:18:03.951024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-06-10 12:18:03.951321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.672 [2024-06-10 12:18:03.951334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-06-10 12:18:03.951552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.672 [2024-06-10 12:18:03.951565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-06-10 12:18:03.951685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.672 [2024-06-10 12:18:03.951700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-06-10 12:18:03.951889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.672 [2024-06-10 12:18:03.951903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-06-10 12:18:03.952028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.672 [2024-06-10 12:18:03.952042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-06-10 12:18:03.952236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.672 [2024-06-10 12:18:03.952249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-06-10 12:18:03.952417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.672 [2024-06-10 12:18:03.952430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-06-10 12:18:03.952592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.672 [2024-06-10 12:18:03.952608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-06-10 12:18:03.952712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.672 [2024-06-10 12:18:03.952726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-06-10 12:18:03.952895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.672 [2024-06-10 12:18:03.952909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-06-10 12:18:03.953110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.672 [2024-06-10 12:18:03.953123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-06-10 12:18:03.953235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.672 [2024-06-10 12:18:03.953249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-06-10 12:18:03.953370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.672 [2024-06-10 12:18:03.953384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-06-10 12:18:03.953632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.672 [2024-06-10 12:18:03.953646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-06-10 12:18:03.953814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.672 [2024-06-10 12:18:03.953827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-06-10 12:18:03.953933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.672 [2024-06-10 12:18:03.953947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-06-10 12:18:03.954126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.672 [2024-06-10 12:18:03.954140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-06-10 12:18:03.954330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.672 [2024-06-10 12:18:03.954344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-06-10 12:18:03.954443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.672 [2024-06-10 12:18:03.954456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-06-10 12:18:03.954575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.672 [2024-06-10 12:18:03.954587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.672 qpair failed and we were unable to recover it.
00:29:14.672 [2024-06-10 12:18:03.954807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.673 [2024-06-10 12:18:03.954820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.673 qpair failed and we were unable to recover it.
00:29:14.673 [2024-06-10 12:18:03.954942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.673 [2024-06-10 12:18:03.954955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.673 qpair failed and we were unable to recover it.
00:29:14.673 [2024-06-10 12:18:03.955182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.673 [2024-06-10 12:18:03.955195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.673 qpair failed and we were unable to recover it.
00:29:14.673 [2024-06-10 12:18:03.955373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.673 [2024-06-10 12:18:03.955386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.673 qpair failed and we were unable to recover it.
00:29:14.673 [2024-06-10 12:18:03.955495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.673 [2024-06-10 12:18:03.955508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.673 qpair failed and we were unable to recover it.
00:29:14.673 [2024-06-10 12:18:03.955662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.673 [2024-06-10 12:18:03.955676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.673 qpair failed and we were unable to recover it.
00:29:14.673 [2024-06-10 12:18:03.955832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.673 [2024-06-10 12:18:03.955846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.673 qpair failed and we were unable to recover it.
00:29:14.673 [2024-06-10 12:18:03.955962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.673 [2024-06-10 12:18:03.955976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.673 qpair failed and we were unable to recover it.
00:29:14.673 [2024-06-10 12:18:03.956145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.673 [2024-06-10 12:18:03.956159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.673 qpair failed and we were unable to recover it.
00:29:14.673 [2024-06-10 12:18:03.956325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.673 [2024-06-10 12:18:03.956339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.673 qpair failed and we were unable to recover it.
00:29:14.673 [2024-06-10 12:18:03.956565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.673 [2024-06-10 12:18:03.956581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.673 qpair failed and we were unable to recover it.
00:29:14.673 [2024-06-10 12:18:03.956699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.673 [2024-06-10 12:18:03.956713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.673 qpair failed and we were unable to recover it.
00:29:14.673 [2024-06-10 12:18:03.956931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.673 [2024-06-10 12:18:03.956945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.673 qpair failed and we were unable to recover it.
00:29:14.673 [2024-06-10 12:18:03.957115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.673 [2024-06-10 12:18:03.957129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.673 qpair failed and we were unable to recover it.
00:29:14.673 [2024-06-10 12:18:03.957405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.673 [2024-06-10 12:18:03.957445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:14.673 qpair failed and we were unable to recover it.
00:29:14.673 [2024-06-10 12:18:03.957593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.673 [2024-06-10 12:18:03.957632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.673 qpair failed and we were unable to recover it.
00:29:14.673 [2024-06-10 12:18:03.957831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.673 [2024-06-10 12:18:03.957851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.673 qpair failed and we were unable to recover it.
00:29:14.673 [2024-06-10 12:18:03.957980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.673 [2024-06-10 12:18:03.957998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.673 qpair failed and we were unable to recover it.
00:29:14.673 [2024-06-10 12:18:03.958212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.673 [2024-06-10 12:18:03.958229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.673 qpair failed and we were unable to recover it.
00:29:14.673 [2024-06-10 12:18:03.958408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.673 [2024-06-10 12:18:03.958426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.673 qpair failed and we were unable to recover it.
00:29:14.673 [2024-06-10 12:18:03.958607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.673 [2024-06-10 12:18:03.958625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.673 qpair failed and we were unable to recover it.
00:29:14.673 [2024-06-10 12:18:03.958753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.673 [2024-06-10 12:18:03.958771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.673 qpair failed and we were unable to recover it.
00:29:14.673 [2024-06-10 12:18:03.958943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.673 [2024-06-10 12:18:03.958961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.673 qpair failed and we were unable to recover it.
00:29:14.673 [2024-06-10 12:18:03.959250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.673 [2024-06-10 12:18:03.959268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.673 qpair failed and we were unable to recover it.
00:29:14.673 [2024-06-10 12:18:03.959536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.673 [2024-06-10 12:18:03.959554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.673 qpair failed and we were unable to recover it.
00:29:14.673 [2024-06-10 12:18:03.959683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.673 [2024-06-10 12:18:03.959701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.673 qpair failed and we were unable to recover it.
00:29:14.673 [2024-06-10 12:18:03.959814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.673 [2024-06-10 12:18:03.959829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.673 qpair failed and we were unable to recover it.
00:29:14.673 [2024-06-10 12:18:03.960071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.673 [2024-06-10 12:18:03.960087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.673 qpair failed and we were unable to recover it.
00:29:14.673 [2024-06-10 12:18:03.960302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.673 [2024-06-10 12:18:03.960316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.673 qpair failed and we were unable to recover it.
00:29:14.673 [2024-06-10 12:18:03.960591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.673 [2024-06-10 12:18:03.960605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.673 qpair failed and we were unable to recover it.
00:29:14.673 [2024-06-10 12:18:03.960708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.673 [2024-06-10 12:18:03.960721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.673 qpair failed and we were unable to recover it.
00:29:14.673 [2024-06-10 12:18:03.960895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.673 [2024-06-10 12:18:03.960908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.673 qpair failed and we were unable to recover it.
00:29:14.673 [2024-06-10 12:18:03.961029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.673 [2024-06-10 12:18:03.961043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.673 qpair failed and we were unable to recover it. 00:29:14.673 [2024-06-10 12:18:03.961240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.673 [2024-06-10 12:18:03.961253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.673 qpair failed and we were unable to recover it. 00:29:14.673 [2024-06-10 12:18:03.961439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.673 [2024-06-10 12:18:03.961453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.673 qpair failed and we were unable to recover it. 00:29:14.673 [2024-06-10 12:18:03.961739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.673 [2024-06-10 12:18:03.961753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.673 qpair failed and we were unable to recover it. 00:29:14.673 [2024-06-10 12:18:03.961936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.673 [2024-06-10 12:18:03.961950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.673 qpair failed and we were unable to recover it. 
00:29:14.673 [2024-06-10 12:18:03.962160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.673 [2024-06-10 12:18:03.962173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.673 qpair failed and we were unable to recover it. 00:29:14.673 [2024-06-10 12:18:03.962394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.673 [2024-06-10 12:18:03.962407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 00:29:14.674 [2024-06-10 12:18:03.962642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.962656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 00:29:14.674 [2024-06-10 12:18:03.962875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.962889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 00:29:14.674 [2024-06-10 12:18:03.963000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.963013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 
00:29:14.674 [2024-06-10 12:18:03.963130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.963143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 00:29:14.674 [2024-06-10 12:18:03.963297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.963311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 00:29:14.674 [2024-06-10 12:18:03.963534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.963549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 00:29:14.674 [2024-06-10 12:18:03.963761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.963774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 00:29:14.674 [2024-06-10 12:18:03.963943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.963957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 
00:29:14.674 [2024-06-10 12:18:03.964122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.964136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 00:29:14.674 [2024-06-10 12:18:03.964353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.964367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 00:29:14.674 [2024-06-10 12:18:03.964448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.964461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 00:29:14.674 [2024-06-10 12:18:03.964577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.964591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 00:29:14.674 [2024-06-10 12:18:03.964720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.964734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 
00:29:14.674 [2024-06-10 12:18:03.964903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.964916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 00:29:14.674 [2024-06-10 12:18:03.965020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.965032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 00:29:14.674 [2024-06-10 12:18:03.965238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.965252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 00:29:14.674 [2024-06-10 12:18:03.965513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.965527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 00:29:14.674 [2024-06-10 12:18:03.965723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.965737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 
00:29:14.674 [2024-06-10 12:18:03.965834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.965847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 00:29:14.674 [2024-06-10 12:18:03.966009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.966022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 00:29:14.674 [2024-06-10 12:18:03.966292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.966305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 00:29:14.674 [2024-06-10 12:18:03.966460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.966473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 00:29:14.674 [2024-06-10 12:18:03.966592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.966606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 
00:29:14.674 [2024-06-10 12:18:03.966803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.966817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 00:29:14.674 [2024-06-10 12:18:03.966984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.966998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 00:29:14.674 [2024-06-10 12:18:03.967222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.967235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 00:29:14.674 [2024-06-10 12:18:03.967348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.967362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 00:29:14.674 [2024-06-10 12:18:03.967581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.967595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 
00:29:14.674 [2024-06-10 12:18:03.967765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.967780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 00:29:14.674 [2024-06-10 12:18:03.967934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.967947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 00:29:14.674 [2024-06-10 12:18:03.968127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.968140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 00:29:14.674 [2024-06-10 12:18:03.968357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.968371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 00:29:14.674 [2024-06-10 12:18:03.968459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.968472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 
00:29:14.674 [2024-06-10 12:18:03.968615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.968631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 00:29:14.674 [2024-06-10 12:18:03.968885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.968898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 00:29:14.674 [2024-06-10 12:18:03.969014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.969028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 00:29:14.674 [2024-06-10 12:18:03.969232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.674 [2024-06-10 12:18:03.969246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.674 qpair failed and we were unable to recover it. 00:29:14.674 [2024-06-10 12:18:03.969365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.969378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 
00:29:14.675 [2024-06-10 12:18:03.969575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.969589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 00:29:14.675 [2024-06-10 12:18:03.969682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.969696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 00:29:14.675 [2024-06-10 12:18:03.969791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.969805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 00:29:14.675 [2024-06-10 12:18:03.969907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.969921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 00:29:14.675 [2024-06-10 12:18:03.970035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.970050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 
00:29:14.675 [2024-06-10 12:18:03.970217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.970230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 00:29:14.675 [2024-06-10 12:18:03.970376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.970391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 00:29:14.675 [2024-06-10 12:18:03.970620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.970634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 00:29:14.675 [2024-06-10 12:18:03.970803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.970817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 00:29:14.675 [2024-06-10 12:18:03.970979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.970992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 
00:29:14.675 [2024-06-10 12:18:03.971191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.971205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 00:29:14.675 [2024-06-10 12:18:03.971444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.971457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 00:29:14.675 [2024-06-10 12:18:03.971627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.971641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 00:29:14.675 [2024-06-10 12:18:03.971834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.971848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 00:29:14.675 [2024-06-10 12:18:03.972015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.972028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 
00:29:14.675 [2024-06-10 12:18:03.972208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.972221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 00:29:14.675 [2024-06-10 12:18:03.972369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.972382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 00:29:14.675 [2024-06-10 12:18:03.972494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.972509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 00:29:14.675 [2024-06-10 12:18:03.972633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.972646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 00:29:14.675 [2024-06-10 12:18:03.972827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.972841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 
00:29:14.675 [2024-06-10 12:18:03.972938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.972953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 00:29:14.675 [2024-06-10 12:18:03.973061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.973074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 00:29:14.675 [2024-06-10 12:18:03.973253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.973266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 00:29:14.675 [2024-06-10 12:18:03.973457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.973470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 00:29:14.675 [2024-06-10 12:18:03.973608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.973621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 
00:29:14.675 [2024-06-10 12:18:03.973742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.973756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 00:29:14.675 [2024-06-10 12:18:03.973857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.973871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 00:29:14.675 [2024-06-10 12:18:03.973992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.974005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 00:29:14.675 [2024-06-10 12:18:03.974112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.974125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 00:29:14.675 [2024-06-10 12:18:03.974304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.974317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 
00:29:14.675 [2024-06-10 12:18:03.974490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.974504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 00:29:14.675 [2024-06-10 12:18:03.974619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.974632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 00:29:14.675 [2024-06-10 12:18:03.974730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.974744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 00:29:14.675 [2024-06-10 12:18:03.974937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.974950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 00:29:14.675 [2024-06-10 12:18:03.975058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.975071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 
00:29:14.675 [2024-06-10 12:18:03.975309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.675 [2024-06-10 12:18:03.975322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.675 qpair failed and we were unable to recover it. 00:29:14.675 [2024-06-10 12:18:03.975512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.975525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 00:29:14.676 [2024-06-10 12:18:03.975705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.975718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 00:29:14.676 [2024-06-10 12:18:03.975843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.975856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 00:29:14.676 [2024-06-10 12:18:03.975969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.975982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 
00:29:14.676 [2024-06-10 12:18:03.976157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.976170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 00:29:14.676 [2024-06-10 12:18:03.976261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.976274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 00:29:14.676 [2024-06-10 12:18:03.976457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.976471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 00:29:14.676 [2024-06-10 12:18:03.976693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.976707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 00:29:14.676 [2024-06-10 12:18:03.976861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.976875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 
00:29:14.676 [2024-06-10 12:18:03.977066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.977079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 00:29:14.676 [2024-06-10 12:18:03.977236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.977249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 00:29:14.676 [2024-06-10 12:18:03.977395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.977411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 00:29:14.676 [2024-06-10 12:18:03.977585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.977599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 00:29:14.676 [2024-06-10 12:18:03.977769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.977783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 
00:29:14.676 [2024-06-10 12:18:03.977908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.977922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 00:29:14.676 [2024-06-10 12:18:03.978082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.978095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 00:29:14.676 [2024-06-10 12:18:03.978349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.978363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 00:29:14.676 [2024-06-10 12:18:03.978629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.978643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 00:29:14.676 [2024-06-10 12:18:03.978860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.978873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 
00:29:14.676 [2024-06-10 12:18:03.979140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.979153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 00:29:14.676 [2024-06-10 12:18:03.979314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.979327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 00:29:14.676 [2024-06-10 12:18:03.979495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.979511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 00:29:14.676 [2024-06-10 12:18:03.979687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.979701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 00:29:14.676 [2024-06-10 12:18:03.979914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.979927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 
00:29:14.676 [2024-06-10 12:18:03.980038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.980051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 00:29:14.676 [2024-06-10 12:18:03.980228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.980242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 00:29:14.676 [2024-06-10 12:18:03.980411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.980425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 00:29:14.676 [2024-06-10 12:18:03.980544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.980558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 00:29:14.676 [2024-06-10 12:18:03.980751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.980765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 
00:29:14.676 [2024-06-10 12:18:03.980886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.980900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 00:29:14.676 [2024-06-10 12:18:03.981009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.981022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 00:29:14.676 [2024-06-10 12:18:03.981429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.981451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 00:29:14.676 [2024-06-10 12:18:03.981636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.981651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 00:29:14.676 [2024-06-10 12:18:03.981832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.981845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 
00:29:14.676 [2024-06-10 12:18:03.981957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.676 [2024-06-10 12:18:03.981971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.676 qpair failed and we were unable to recover it. 00:29:14.676 [2024-06-10 12:18:03.982273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.982286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 00:29:14.677 [2024-06-10 12:18:03.982469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.982488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 00:29:14.677 [2024-06-10 12:18:03.982711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.982725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 00:29:14.677 [2024-06-10 12:18:03.982827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.982840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 
00:29:14.677 [2024-06-10 12:18:03.983002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.983016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 00:29:14.677 [2024-06-10 12:18:03.983245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.983260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 00:29:14.677 [2024-06-10 12:18:03.983418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.983432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 00:29:14.677 [2024-06-10 12:18:03.983633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.983647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 00:29:14.677 [2024-06-10 12:18:03.983886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.983900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 
00:29:14.677 [2024-06-10 12:18:03.984060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.984074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 00:29:14.677 [2024-06-10 12:18:03.984238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.984251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 00:29:14.677 [2024-06-10 12:18:03.984407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.984420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 00:29:14.677 [2024-06-10 12:18:03.984545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.984559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 00:29:14.677 [2024-06-10 12:18:03.984656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.984669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 
00:29:14.677 [2024-06-10 12:18:03.984767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.984780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 00:29:14.677 [2024-06-10 12:18:03.984890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.984903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 00:29:14.677 [2024-06-10 12:18:03.985009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.985022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 00:29:14.677 [2024-06-10 12:18:03.985119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.985132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 00:29:14.677 [2024-06-10 12:18:03.985291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.985305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 
00:29:14.677 [2024-06-10 12:18:03.985465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.985484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 00:29:14.677 [2024-06-10 12:18:03.985639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.985652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 00:29:14.677 [2024-06-10 12:18:03.985819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.985833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 00:29:14.677 [2024-06-10 12:18:03.985957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.985970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 00:29:14.677 [2024-06-10 12:18:03.986169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.986183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 
00:29:14.677 [2024-06-10 12:18:03.986359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.986373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 00:29:14.677 [2024-06-10 12:18:03.986567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.986581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 00:29:14.677 [2024-06-10 12:18:03.986777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.986792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 00:29:14.677 [2024-06-10 12:18:03.986983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.986997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 00:29:14.677 [2024-06-10 12:18:03.987208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.987222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 
00:29:14.677 [2024-06-10 12:18:03.987403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.987416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 00:29:14.677 [2024-06-10 12:18:03.987527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.987540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 00:29:14.677 [2024-06-10 12:18:03.987693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.987707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 00:29:14.677 [2024-06-10 12:18:03.987873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.987886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 00:29:14.677 [2024-06-10 12:18:03.987996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.988010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 
00:29:14.677 [2024-06-10 12:18:03.988116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.677 [2024-06-10 12:18:03.988129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.677 qpair failed and we were unable to recover it. 00:29:14.678 [2024-06-10 12:18:03.988358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.988372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 00:29:14.678 [2024-06-10 12:18:03.988568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.988582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 00:29:14.678 [2024-06-10 12:18:03.988760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.988773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 00:29:14.678 [2024-06-10 12:18:03.988968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.988982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 
00:29:14.678 [2024-06-10 12:18:03.989163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.989176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 00:29:14.678 [2024-06-10 12:18:03.989349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.989362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 00:29:14.678 [2024-06-10 12:18:03.989472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.989498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 00:29:14.678 [2024-06-10 12:18:03.989648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.989662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 00:29:14.678 [2024-06-10 12:18:03.989781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.989794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 
00:29:14.678 [2024-06-10 12:18:03.989912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.989924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 00:29:14.678 [2024-06-10 12:18:03.990017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.990030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 00:29:14.678 [2024-06-10 12:18:03.990152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.990165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 00:29:14.678 [2024-06-10 12:18:03.990333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.990346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 00:29:14.678 [2024-06-10 12:18:03.990514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.990528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 
00:29:14.678 [2024-06-10 12:18:03.990693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.990707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 00:29:14.678 [2024-06-10 12:18:03.990820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.990833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 00:29:14.678 [2024-06-10 12:18:03.991017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.991031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 00:29:14.678 [2024-06-10 12:18:03.991212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.991226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 00:29:14.678 [2024-06-10 12:18:03.991325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.991336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 
00:29:14.678 [2024-06-10 12:18:03.991502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.991515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 00:29:14.678 [2024-06-10 12:18:03.991599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.991610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 00:29:14.678 [2024-06-10 12:18:03.991825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.991839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 00:29:14.678 [2024-06-10 12:18:03.992000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.992013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 00:29:14.678 [2024-06-10 12:18:03.992195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.992209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 
00:29:14.678 [2024-06-10 12:18:03.992376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.992389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 00:29:14.678 [2024-06-10 12:18:03.992486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.992499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 00:29:14.678 [2024-06-10 12:18:03.992684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.992697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 00:29:14.678 [2024-06-10 12:18:03.992860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.992874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 00:29:14.678 [2024-06-10 12:18:03.993093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.993107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 
00:29:14.678 [2024-06-10 12:18:03.993199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.993211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 00:29:14.678 [2024-06-10 12:18:03.993362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.993375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 00:29:14.678 [2024-06-10 12:18:03.993636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.993652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 00:29:14.678 [2024-06-10 12:18:03.993834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.993848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 00:29:14.678 [2024-06-10 12:18:03.994063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.994076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 
00:29:14.678 [2024-06-10 12:18:03.994251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.994264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 00:29:14.678 [2024-06-10 12:18:03.994482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.994496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 00:29:14.678 [2024-06-10 12:18:03.994616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.994629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 00:29:14.678 [2024-06-10 12:18:03.994723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.678 [2024-06-10 12:18:03.994737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.678 qpair failed and we were unable to recover it. 00:29:14.678 [2024-06-10 12:18:03.994890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.679 [2024-06-10 12:18:03.994905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.679 qpair failed and we were unable to recover it. 
00:29:14.679 [2024-06-10 12:18:03.995075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:03.995088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:03.995321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:03.995335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:03.995511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:03.995524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:03.995739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:03.995753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:03.995851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:03.995864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:03.995971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:03.995985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:03.996256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:03.996270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:03.996505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:03.996520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:03.996627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:03.996641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:03.996789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:03.996803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:03.996950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:03.996963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:03.997085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:03.997098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:03.997262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:03.997275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:03.997497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:03.997511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:03.997678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:03.997693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:03.997781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:03.997793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:03.998028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:03.998041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:03.998339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:03.998353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:03.998622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:03.998635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:03.998805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:03.998818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:03.998918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:03.998930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:03.999112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:03.999125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:03.999292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:03.999306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:03.999543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:03.999556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:03.999714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:03.999727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:03.999830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:03.999843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:03.999998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:04.000011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:04.000278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:04.000291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:04.000474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:04.000498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:04.000700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:04.000713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:04.000824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:04.000838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:04.000940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:04.000954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:04.001111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:04.001126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:04.001279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:04.001292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:04.001518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:04.001531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:04.001752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:04.001765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:04.001937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.679 [2024-06-10 12:18:04.001950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.679 qpair failed and we were unable to recover it.
00:29:14.679 [2024-06-10 12:18:04.002039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.002052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.002178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.002191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.002356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.002369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.002584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.002597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.002842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.002856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.002975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.002989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.003173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.003186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.003354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.003367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.003496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.003509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.003659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.003673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.003789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.003802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.003972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.003985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.004157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.004170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.004383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.004397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.004573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.004586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.004783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.004796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.004880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.004892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.005123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.005136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.005293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.005306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.005503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.005517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.005619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.005632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.005798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.005811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.005927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.005940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.006034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.006047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.006308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.006321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.006481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.006495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.006623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.006636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.006852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.006865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.007059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.007072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.007266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.007280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.007464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.007482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.007655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.007668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.007764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.007776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.007946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.007959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.008211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.008224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.008462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.008481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.680 [2024-06-10 12:18:04.008653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.680 [2024-06-10 12:18:04.008667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.680 qpair failed and we were unable to recover it.
00:29:14.681 [2024-06-10 12:18:04.008757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.681 [2024-06-10 12:18:04.008771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.681 qpair failed and we were unable to recover it.
00:29:14.681 [2024-06-10 12:18:04.008899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.681 [2024-06-10 12:18:04.008913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.681 qpair failed and we were unable to recover it.
00:29:14.681 [2024-06-10 12:18:04.009078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.681 [2024-06-10 12:18:04.009092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.681 qpair failed and we were unable to recover it.
00:29:14.681 [2024-06-10 12:18:04.009256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.681 [2024-06-10 12:18:04.009269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.681 qpair failed and we were unable to recover it.
00:29:14.681 [2024-06-10 12:18:04.009431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.681 [2024-06-10 12:18:04.009444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.681 qpair failed and we were unable to recover it.
00:29:14.681 [2024-06-10 12:18:04.009558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.681 [2024-06-10 12:18:04.009572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.681 qpair failed and we were unable to recover it.
00:29:14.681 [2024-06-10 12:18:04.009690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.681 [2024-06-10 12:18:04.009704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.681 qpair failed and we were unable to recover it.
00:29:14.681 [2024-06-10 12:18:04.009864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.681 [2024-06-10 12:18:04.009878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.681 qpair failed and we were unable to recover it.
00:29:14.681 [2024-06-10 12:18:04.010050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.681 [2024-06-10 12:18:04.010063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.681 qpair failed and we were unable to recover it.
00:29:14.681 [2024-06-10 12:18:04.010307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.681 [2024-06-10 12:18:04.010320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.681 qpair failed and we were unable to recover it.
00:29:14.681 [2024-06-10 12:18:04.010435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.681 [2024-06-10 12:18:04.010449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.681 qpair failed and we were unable to recover it.
00:29:14.681 [2024-06-10 12:18:04.010563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.681 [2024-06-10 12:18:04.010577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.681 qpair failed and we were unable to recover it.
00:29:14.681 [2024-06-10 12:18:04.010772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.681 [2024-06-10 12:18:04.010785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.681 qpair failed and we were unable to recover it.
00:29:14.681 [2024-06-10 12:18:04.010946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.681 [2024-06-10 12:18:04.010959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.681 qpair failed and we were unable to recover it.
00:29:14.681 [2024-06-10 12:18:04.011157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.681 [2024-06-10 12:18:04.011170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.681 qpair failed and we were unable to recover it.
00:29:14.681 [2024-06-10 12:18:04.011350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.681 [2024-06-10 12:18:04.011364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.681 qpair failed and we were unable to recover it.
00:29:14.681 [2024-06-10 12:18:04.011492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.681 [2024-06-10 12:18:04.011506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.681 qpair failed and we were unable to recover it.
00:29:14.681 [2024-06-10 12:18:04.011652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.681 [2024-06-10 12:18:04.011666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.681 qpair failed and we were unable to recover it.
00:29:14.681 [2024-06-10 12:18:04.012473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.681 [2024-06-10 12:18:04.012503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.681 qpair failed and we were unable to recover it.
00:29:14.681 [2024-06-10 12:18:04.012709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.681 [2024-06-10 12:18:04.012722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.681 qpair failed and we were unable to recover it.
00:29:14.681 [2024-06-10 12:18:04.012893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.681 [2024-06-10 12:18:04.012906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.681 qpair failed and we were unable to recover it.
00:29:14.681 [2024-06-10 12:18:04.013021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.681 [2024-06-10 12:18:04.013035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.681 qpair failed and we were unable to recover it. 00:29:14.681 [2024-06-10 12:18:04.013281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.681 [2024-06-10 12:18:04.013294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.681 qpair failed and we were unable to recover it. 00:29:14.681 [2024-06-10 12:18:04.013519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.681 [2024-06-10 12:18:04.013534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.681 qpair failed and we were unable to recover it. 00:29:14.681 [2024-06-10 12:18:04.013752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.681 [2024-06-10 12:18:04.013766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.681 qpair failed and we were unable to recover it. 00:29:14.681 [2024-06-10 12:18:04.013986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.681 [2024-06-10 12:18:04.013999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.681 qpair failed and we were unable to recover it. 
00:29:14.681 [2024-06-10 12:18:04.014193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.681 [2024-06-10 12:18:04.014206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.681 qpair failed and we were unable to recover it. 00:29:14.681 [2024-06-10 12:18:04.014439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.681 [2024-06-10 12:18:04.014453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.681 qpair failed and we were unable to recover it. 00:29:14.681 [2024-06-10 12:18:04.014644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.681 [2024-06-10 12:18:04.014658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.681 qpair failed and we were unable to recover it. 00:29:14.681 [2024-06-10 12:18:04.014781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.681 [2024-06-10 12:18:04.014795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.681 qpair failed and we were unable to recover it. 00:29:14.681 [2024-06-10 12:18:04.014898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.681 [2024-06-10 12:18:04.014912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.681 qpair failed and we were unable to recover it. 
00:29:14.682 [2024-06-10 12:18:04.015116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.015129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 00:29:14.682 [2024-06-10 12:18:04.015302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.015316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 00:29:14.682 [2024-06-10 12:18:04.015468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.015492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 00:29:14.682 [2024-06-10 12:18:04.015660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.015674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 00:29:14.682 [2024-06-10 12:18:04.015908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.015921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 
00:29:14.682 [2024-06-10 12:18:04.016131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.016144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 00:29:14.682 [2024-06-10 12:18:04.016313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.016326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 00:29:14.682 [2024-06-10 12:18:04.016547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.016563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 00:29:14.682 [2024-06-10 12:18:04.016720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.016733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 00:29:14.682 [2024-06-10 12:18:04.016994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.017007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 
00:29:14.682 [2024-06-10 12:18:04.017185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.017198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 00:29:14.682 [2024-06-10 12:18:04.017571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.017592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 00:29:14.682 [2024-06-10 12:18:04.017769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.017783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 00:29:14.682 [2024-06-10 12:18:04.018021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.018034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 00:29:14.682 [2024-06-10 12:18:04.018223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.018236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 
00:29:14.682 [2024-06-10 12:18:04.018474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.018494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 00:29:14.682 [2024-06-10 12:18:04.018608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.018622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 00:29:14.682 [2024-06-10 12:18:04.018790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.018804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 00:29:14.682 [2024-06-10 12:18:04.018953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.018966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 00:29:14.682 [2024-06-10 12:18:04.019077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.019090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 
00:29:14.682 [2024-06-10 12:18:04.019334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.019348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 00:29:14.682 [2024-06-10 12:18:04.019516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.019530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 00:29:14.682 [2024-06-10 12:18:04.019767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.019781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 00:29:14.682 [2024-06-10 12:18:04.020564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.020587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 00:29:14.682 [2024-06-10 12:18:04.020838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.020852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 
00:29:14.682 [2024-06-10 12:18:04.021022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.021036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 00:29:14.682 [2024-06-10 12:18:04.021202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.021216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 00:29:14.682 [2024-06-10 12:18:04.021383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.021396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 00:29:14.682 [2024-06-10 12:18:04.021626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.021639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 00:29:14.682 [2024-06-10 12:18:04.021758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.021771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 
00:29:14.682 [2024-06-10 12:18:04.021865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.021878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 00:29:14.682 [2024-06-10 12:18:04.022036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.022049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 00:29:14.682 [2024-06-10 12:18:04.022296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.022310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 00:29:14.682 [2024-06-10 12:18:04.022577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.022590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 00:29:14.682 [2024-06-10 12:18:04.022708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.022722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 
00:29:14.682 [2024-06-10 12:18:04.022881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.022894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 00:29:14.682 [2024-06-10 12:18:04.023057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.023070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 00:29:14.682 [2024-06-10 12:18:04.023232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.682 [2024-06-10 12:18:04.023245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.682 qpair failed and we were unable to recover it. 00:29:14.683 [2024-06-10 12:18:04.023401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.023415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 00:29:14.683 [2024-06-10 12:18:04.023533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.023547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 
00:29:14.683 [2024-06-10 12:18:04.023729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.023742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 00:29:14.683 [2024-06-10 12:18:04.023933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.023946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 00:29:14.683 [2024-06-10 12:18:04.024081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.024094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 00:29:14.683 [2024-06-10 12:18:04.024319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.024333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 00:29:14.683 [2024-06-10 12:18:04.024551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.024565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 
00:29:14.683 [2024-06-10 12:18:04.024726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.024740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 00:29:14.683 [2024-06-10 12:18:04.024904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.024917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 00:29:14.683 [2024-06-10 12:18:04.025033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.025049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 00:29:14.683 [2024-06-10 12:18:04.025234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.025247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 00:29:14.683 [2024-06-10 12:18:04.025395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.025409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 
00:29:14.683 [2024-06-10 12:18:04.025562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.025576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 00:29:14.683 [2024-06-10 12:18:04.025743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.025756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 00:29:14.683 [2024-06-10 12:18:04.026003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.026016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 00:29:14.683 [2024-06-10 12:18:04.026099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.026112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 00:29:14.683 [2024-06-10 12:18:04.026327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.026340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 
00:29:14.683 [2024-06-10 12:18:04.026502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.026516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 00:29:14.683 [2024-06-10 12:18:04.026668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.026681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 00:29:14.683 [2024-06-10 12:18:04.026849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.026863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 00:29:14.683 [2024-06-10 12:18:04.027039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.027052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 00:29:14.683 [2024-06-10 12:18:04.027146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.027158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 
00:29:14.683 [2024-06-10 12:18:04.027418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.027431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 00:29:14.683 [2024-06-10 12:18:04.027627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.027640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 00:29:14.683 [2024-06-10 12:18:04.027743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.027755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 00:29:14.683 [2024-06-10 12:18:04.027900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.027913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 00:29:14.683 [2024-06-10 12:18:04.028198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.028211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 
00:29:14.683 [2024-06-10 12:18:04.028431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.028445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 00:29:14.683 [2024-06-10 12:18:04.028638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.028653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 00:29:14.683 [2024-06-10 12:18:04.028751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.028764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 00:29:14.683 [2024-06-10 12:18:04.028936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.028949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 00:29:14.683 [2024-06-10 12:18:04.029097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.029110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 
00:29:14.683 [2024-06-10 12:18:04.029196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.029208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 00:29:14.683 [2024-06-10 12:18:04.029446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.029459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 00:29:14.683 [2024-06-10 12:18:04.029639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.029653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 00:29:14.683 [2024-06-10 12:18:04.029813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.029827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 00:29:14.683 [2024-06-10 12:18:04.029936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.029950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 
00:29:14.683 [2024-06-10 12:18:04.030045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.683 [2024-06-10 12:18:04.030057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.683 qpair failed and we were unable to recover it. 00:29:14.684 [2024-06-10 12:18:04.030288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.030301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 00:29:14.684 [2024-06-10 12:18:04.030404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.030416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 00:29:14.684 [2024-06-10 12:18:04.030572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.030586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 00:29:14.684 [2024-06-10 12:18:04.030689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.030702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 
00:29:14.684 [2024-06-10 12:18:04.030800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.030812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 00:29:14.684 [2024-06-10 12:18:04.030974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.030987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 00:29:14.684 [2024-06-10 12:18:04.031223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.031236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 00:29:14.684 [2024-06-10 12:18:04.031396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.031409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 00:29:14.684 [2024-06-10 12:18:04.031586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.031600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 
00:29:14.684 [2024-06-10 12:18:04.031753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.031766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 00:29:14.684 [2024-06-10 12:18:04.031916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.031929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 00:29:14.684 [2024-06-10 12:18:04.032131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.032146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 00:29:14.684 [2024-06-10 12:18:04.032360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.032374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 00:29:14.684 [2024-06-10 12:18:04.032606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.032620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 
00:29:14.684 [2024-06-10 12:18:04.032774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.032787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 00:29:14.684 [2024-06-10 12:18:04.033031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.033044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 00:29:14.684 [2024-06-10 12:18:04.033224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.033237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 00:29:14.684 [2024-06-10 12:18:04.033480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.033493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 00:29:14.684 [2024-06-10 12:18:04.033647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.033661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 
00:29:14.684 [2024-06-10 12:18:04.033782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.033795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 00:29:14.684 [2024-06-10 12:18:04.033914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.033927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 00:29:14.684 [2024-06-10 12:18:04.034027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.034040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 00:29:14.684 [2024-06-10 12:18:04.034196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.034209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 00:29:14.684 [2024-06-10 12:18:04.034279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.034292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 
00:29:14.684 [2024-06-10 12:18:04.034495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.034509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 00:29:14.684 [2024-06-10 12:18:04.034680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.034693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 00:29:14.684 [2024-06-10 12:18:04.034806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.034827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 00:29:14.684 [2024-06-10 12:18:04.035019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.035032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 00:29:14.684 [2024-06-10 12:18:04.035137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.035151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 
00:29:14.684 [2024-06-10 12:18:04.035315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.035328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 00:29:14.684 [2024-06-10 12:18:04.035423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.035437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 00:29:14.684 [2024-06-10 12:18:04.035659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.035672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 00:29:14.684 [2024-06-10 12:18:04.035829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.035842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 00:29:14.684 [2024-06-10 12:18:04.036009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.036022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 
00:29:14.684 [2024-06-10 12:18:04.036129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.036141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.684 qpair failed and we were unable to recover it. 00:29:14.684 [2024-06-10 12:18:04.036303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.684 [2024-06-10 12:18:04.036315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-06-10 12:18:04.036419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.036432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-06-10 12:18:04.036595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.036609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-06-10 12:18:04.036784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.036797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 
00:29:14.685 [2024-06-10 12:18:04.036892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.036904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-06-10 12:18:04.036994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.037007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-06-10 12:18:04.037167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.037180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-06-10 12:18:04.037362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.037376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-06-10 12:18:04.037613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.037626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 
00:29:14.685 [2024-06-10 12:18:04.037708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.037720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-06-10 12:18:04.037822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.037835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-06-10 12:18:04.037914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.037927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-06-10 12:18:04.038144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.038157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-06-10 12:18:04.038249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.038261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 
00:29:14.685 [2024-06-10 12:18:04.038405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.038418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-06-10 12:18:04.038519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.038533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-06-10 12:18:04.038700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.038715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-06-10 12:18:04.038812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.038825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-06-10 12:18:04.039007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.039020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 
00:29:14.685 [2024-06-10 12:18:04.039122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.039134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-06-10 12:18:04.039327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.039340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-06-10 12:18:04.039508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.039521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-06-10 12:18:04.039612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.039624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-06-10 12:18:04.039710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.039723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 
00:29:14.685 [2024-06-10 12:18:04.039829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.039842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-06-10 12:18:04.039932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.039945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-06-10 12:18:04.040104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.040117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-06-10 12:18:04.040231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.040244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-06-10 12:18:04.040391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.040404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 
00:29:14.685 [2024-06-10 12:18:04.040489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.040501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-06-10 12:18:04.040621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.040634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-06-10 12:18:04.040732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.040744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-06-10 12:18:04.040900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.040914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-06-10 12:18:04.041085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.041098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 
00:29:14.685 [2024-06-10 12:18:04.041187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.041199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.685 [2024-06-10 12:18:04.041297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.685 [2024-06-10 12:18:04.041310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.685 qpair failed and we were unable to recover it. 00:29:14.686 [2024-06-10 12:18:04.041465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.686 [2024-06-10 12:18:04.041483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-06-10 12:18:04.041610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.686 [2024-06-10 12:18:04.041623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-06-10 12:18:04.041857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.686 [2024-06-10 12:18:04.041870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.686 qpair failed and we were unable to recover it. 
00:29:14.686 [2024-06-10 12:18:04.042030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.686 [2024-06-10 12:18:04.042043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-06-10 12:18:04.042118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.686 [2024-06-10 12:18:04.042131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-06-10 12:18:04.042233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.686 [2024-06-10 12:18:04.042246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-06-10 12:18:04.042395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.686 [2024-06-10 12:18:04.042410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-06-10 12:18:04.042647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.686 [2024-06-10 12:18:04.042661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.686 qpair failed and we were unable to recover it. 
00:29:14.686 [2024-06-10 12:18:04.042822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.686 [2024-06-10 12:18:04.042835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-06-10 12:18:04.042930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.686 [2024-06-10 12:18:04.042944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-06-10 12:18:04.043027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.686 [2024-06-10 12:18:04.043039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-06-10 12:18:04.043136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.686 [2024-06-10 12:18:04.043148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-06-10 12:18:04.043308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.686 [2024-06-10 12:18:04.043321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.686 qpair failed and we were unable to recover it. 
00:29:14.686 [2024-06-10 12:18:04.043468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.686 [2024-06-10 12:18:04.043486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-06-10 12:18:04.043597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.686 [2024-06-10 12:18:04.043610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-06-10 12:18:04.043762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.686 [2024-06-10 12:18:04.043775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-06-10 12:18:04.043856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.686 [2024-06-10 12:18:04.043868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-06-10 12:18:04.043971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.686 [2024-06-10 12:18:04.043983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.686 qpair failed and we were unable to recover it. 
00:29:14.686 [2024-06-10 12:18:04.044085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.686 [2024-06-10 12:18:04.044099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-06-10 12:18:04.044181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.686 [2024-06-10 12:18:04.044193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-06-10 12:18:04.044306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.686 [2024-06-10 12:18:04.044322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-06-10 12:18:04.044494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.686 [2024-06-10 12:18:04.044507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-06-10 12:18:04.044591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.686 [2024-06-10 12:18:04.044604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.686 qpair failed and we were unable to recover it. 
00:29:14.686 [2024-06-10 12:18:04.044710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.686 [2024-06-10 12:18:04.044723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-06-10 12:18:04.044877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.686 [2024-06-10 12:18:04.044890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-06-10 12:18:04.044973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.686 [2024-06-10 12:18:04.044986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-06-10 12:18:04.045138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.686 [2024-06-10 12:18:04.045152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-06-10 12:18:04.045240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.686 [2024-06-10 12:18:04.045253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.686 qpair failed and we were unable to recover it. 
00:29:14.686 [2024-06-10 12:18:04.045492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.686 [2024-06-10 12:18:04.045506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-06-10 12:18:04.045740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.686 [2024-06-10 12:18:04.045753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-06-10 12:18:04.045841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.686 [2024-06-10 12:18:04.045854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-06-10 12:18:04.045948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.686 [2024-06-10 12:18:04.045961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.686 qpair failed and we were unable to recover it. 00:29:14.686 [2024-06-10 12:18:04.046051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.686 [2024-06-10 12:18:04.046063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.686 qpair failed and we were unable to recover it. 
00:29:14.686 [2024-06-10 12:18:04.046294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.686 [2024-06-10 12:18:04.046308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.686 qpair failed and we were unable to recover it.
00:29:14.686 [2024-06-10 12:18:04.046383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.046395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.046554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.046567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.046724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.046737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.046819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.046832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.046913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.046926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.047105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.047119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.047224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.047237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.047394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.047408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.047555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.047569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.047744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.047757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.047907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.047920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.048051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.048065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.048159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.048171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.048261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.048274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.048509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.048522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.048680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.048693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.048855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.048869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.049033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.049046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.049197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.049210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.049382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.049395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.049541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.049554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.049634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.049647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.049804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.049817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.049980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.049992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.050094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.050108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.050210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.050223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.050462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.050485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.050641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.050655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.050734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.050747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.050846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.050859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.051007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.051020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.051156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.051169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.051358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.051371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.051470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.051488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.051643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.051656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.051824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.051837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.051922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.051936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.052035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.052049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.052210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.052223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.052439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.052452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.052566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.052580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.687 qpair failed and we were unable to recover it.
00:29:14.687 [2024-06-10 12:18:04.052652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.687 [2024-06-10 12:18:04.052664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.052823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.052836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.052932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.052946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.053053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.053066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.053154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.053167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.053336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.053349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.053442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.053455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.053610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.053623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.053768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.053781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.053868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.053881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.053963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.053976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.054125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.054138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.054231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.054244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.054341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.054354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.054513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.054527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.054676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.054689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.054790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.054804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.054899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.054914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.055075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.055087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.055190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.055203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.055367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.055380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.055448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.055459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.055553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.055565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.055648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.055662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.055753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.055766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.055845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.055859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.055957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.055970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.056116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.056128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.056317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.056331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.056513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.056527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.056620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.056633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.056706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.056719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.056893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.056906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.056996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.057008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.057107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.057120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.057201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.057215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.057361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.057374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.057590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.057603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.057701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.057715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.057823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.057836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.057995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.688 [2024-06-10 12:18:04.058009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.688 qpair failed and we were unable to recover it.
00:29:14.688 [2024-06-10 12:18:04.058147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.058160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.058306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.058319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.058473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.058491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.058604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.058616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.058713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.058727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.058820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.058834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.058970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.058983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.059080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.059093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.059179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.059192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.059286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.059299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.059400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.059412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.059503] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89ab50 is same with the state(5) to be set
00:29:14.689 [2024-06-10 12:18:04.059708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.059744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.059928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.059947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.060049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.060067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.060165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.060183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.060289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.060307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.060397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.060415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.060524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.060539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.060629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.060641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.060752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.060765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.060864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.060876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.061023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.061037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.061108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.061121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.061272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.061286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.061383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.061396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.061584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.061597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.061683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.061696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.061876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.061890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.061970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.061983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.062098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.062111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.062206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.062219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.062361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.062375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.062456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.062468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.062568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.062581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.062663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.062676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.062758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.062770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.062922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.062934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.063031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.063046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.063136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.063149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.063264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.063277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.063427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.063440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.063555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.063569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.063682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.063695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.063784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.063796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.063902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.689 [2024-06-10 12:18:04.063915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.689 qpair failed and we were unable to recover it.
00:29:14.689 [2024-06-10 12:18:04.064076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.064090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.064229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.064242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.064334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.064347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.064494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.064507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.064625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.064638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.064720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.064733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.064828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.064840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.064998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.065011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.065128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.065142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.065306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.065319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.065544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.065558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.065658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.065670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.065816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.065828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.065976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.065989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.066093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.066105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.066257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.066270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.066356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.066369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.066515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.066528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.066614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.066626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.066716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.066729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.066887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.066899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.066995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.067008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.067155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.067168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.067405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.067418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.067497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.067510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.067681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.067694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.067788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.067801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.067897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.067910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.068012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.068026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.068171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.068184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.068267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.068279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.068391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.068404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.068506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.068521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.068668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.068681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.068831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.068844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.068994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.069007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.069152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.069164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.069379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.069393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.069503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.690 [2024-06-10 12:18:04.069517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.690 qpair failed and we were unable to recover it.
00:29:14.690 [2024-06-10 12:18:04.069610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.691 [2024-06-10 12:18:04.069622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.691 qpair failed and we were unable to recover it.
00:29:14.691 [2024-06-10 12:18:04.069791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.691 [2024-06-10 12:18:04.069804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.691 qpair failed and we were unable to recover it.
00:29:14.691 [2024-06-10 12:18:04.069897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.691 [2024-06-10 12:18:04.069910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.691 qpair failed and we were unable to recover it.
00:29:14.691 [2024-06-10 12:18:04.070062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.691 [2024-06-10 12:18:04.070075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.691 qpair failed and we were unable to recover it.
00:29:14.691 [2024-06-10 12:18:04.070230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.691 [2024-06-10 12:18:04.070243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.691 qpair failed and we were unable to recover it.
00:29:14.691 [2024-06-10 12:18:04.070384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.691 [2024-06-10 12:18:04.070397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.691 qpair failed and we were unable to recover it.
00:29:14.691 [2024-06-10 12:18:04.070558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.691 [2024-06-10 12:18:04.070571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.691 qpair failed and we were unable to recover it.
00:29:14.691 [2024-06-10 12:18:04.070732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.691 [2024-06-10 12:18:04.070745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.691 qpair failed and we were unable to recover it.
00:29:14.691 [2024-06-10 12:18:04.070838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.691 [2024-06-10 12:18:04.070851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.691 qpair failed and we were unable to recover it.
00:29:14.691 [2024-06-10 12:18:04.070932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.691 [2024-06-10 12:18:04.070946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.691 qpair failed and we were unable to recover it.
00:29:14.691 [2024-06-10 12:18:04.071028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.691 [2024-06-10 12:18:04.071042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.691 qpair failed and we were unable to recover it.
00:29:14.691 [2024-06-10 12:18:04.071190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.691 [2024-06-10 12:18:04.071203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.691 qpair failed and we were unable to recover it.
00:29:14.691 [2024-06-10 12:18:04.071350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.691 [2024-06-10 12:18:04.071364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.691 qpair failed and we were unable to recover it.
00:29:14.691 [2024-06-10 12:18:04.071445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.691 [2024-06-10 12:18:04.071458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.691 qpair failed and we were unable to recover it.
00:29:14.691 [2024-06-10 12:18:04.071619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.691 [2024-06-10 12:18:04.071634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.691 qpair failed and we were unable to recover it.
00:29:14.691 [2024-06-10 12:18:04.071733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.691 [2024-06-10 12:18:04.071746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.691 qpair failed and we were unable to recover it.
00:29:14.691 [2024-06-10 12:18:04.071847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.691 [2024-06-10 12:18:04.071860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.691 qpair failed and we were unable to recover it.
00:29:14.691 [2024-06-10 12:18:04.071930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.691 [2024-06-10 12:18:04.071944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.691 qpair failed and we were unable to recover it.
00:29:14.691 [2024-06-10 12:18:04.072029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.691 [2024-06-10 12:18:04.072042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.691 qpair failed and we were unable to recover it.
00:29:14.691 [2024-06-10 12:18:04.072202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.691 [2024-06-10 12:18:04.072216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.691 qpair failed and we were unable to recover it.
00:29:14.691 [2024-06-10 12:18:04.072375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.691 [2024-06-10 12:18:04.072397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.691 qpair failed and we were unable to recover it.
00:29:14.691 [2024-06-10 12:18:04.072508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.691 [2024-06-10 12:18:04.072527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.691 qpair failed and we were unable to recover it.
00:29:14.691 [2024-06-10 12:18:04.072686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.691 [2024-06-10 12:18:04.072704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.691 qpair failed and we were unable to recover it.
00:29:14.691 [2024-06-10 12:18:04.072887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.691 [2024-06-10 12:18:04.072902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.691 qpair failed and we were unable to recover it. 00:29:14.691 [2024-06-10 12:18:04.072983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.691 [2024-06-10 12:18:04.072996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.691 qpair failed and we were unable to recover it. 00:29:14.691 [2024-06-10 12:18:04.073136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.691 [2024-06-10 12:18:04.073149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.691 qpair failed and we were unable to recover it. 00:29:14.691 [2024-06-10 12:18:04.073245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.691 [2024-06-10 12:18:04.073257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.691 qpair failed and we were unable to recover it. 00:29:14.691 [2024-06-10 12:18:04.073356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.691 [2024-06-10 12:18:04.073369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.691 qpair failed and we were unable to recover it. 
00:29:14.691 [2024-06-10 12:18:04.073451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.691 [2024-06-10 12:18:04.073465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.691 qpair failed and we were unable to recover it. 00:29:14.691 [2024-06-10 12:18:04.073564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.691 [2024-06-10 12:18:04.073577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.691 qpair failed and we were unable to recover it. 00:29:14.691 [2024-06-10 12:18:04.073751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.691 [2024-06-10 12:18:04.073764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.691 qpair failed and we were unable to recover it. 00:29:14.691 [2024-06-10 12:18:04.073847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.691 [2024-06-10 12:18:04.073860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.691 qpair failed and we were unable to recover it. 00:29:14.691 [2024-06-10 12:18:04.074017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.691 [2024-06-10 12:18:04.074031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.691 qpair failed and we were unable to recover it. 
00:29:14.691 [2024-06-10 12:18:04.074124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.691 [2024-06-10 12:18:04.074139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.691 qpair failed and we were unable to recover it. 00:29:14.691 [2024-06-10 12:18:04.074236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.691 [2024-06-10 12:18:04.074249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.691 qpair failed and we were unable to recover it. 00:29:14.691 [2024-06-10 12:18:04.074348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.691 [2024-06-10 12:18:04.074360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.691 qpair failed and we were unable to recover it. 00:29:14.691 [2024-06-10 12:18:04.074456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.691 [2024-06-10 12:18:04.074470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.691 qpair failed and we were unable to recover it. 00:29:14.691 [2024-06-10 12:18:04.074559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.691 [2024-06-10 12:18:04.074571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.691 qpair failed and we were unable to recover it. 
00:29:14.691 [2024-06-10 12:18:04.074675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.691 [2024-06-10 12:18:04.074687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.691 qpair failed and we were unable to recover it. 00:29:14.691 [2024-06-10 12:18:04.074799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.691 [2024-06-10 12:18:04.074812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.691 qpair failed and we were unable to recover it. 00:29:14.691 [2024-06-10 12:18:04.074907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.691 [2024-06-10 12:18:04.074920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.691 qpair failed and we were unable to recover it. 00:29:14.691 [2024-06-10 12:18:04.075014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.691 [2024-06-10 12:18:04.075027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.691 qpair failed and we were unable to recover it. 00:29:14.691 [2024-06-10 12:18:04.075175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.691 [2024-06-10 12:18:04.075187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.691 qpair failed and we were unable to recover it. 
00:29:14.691 [2024-06-10 12:18:04.075267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.691 [2024-06-10 12:18:04.075281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.691 qpair failed and we were unable to recover it. 00:29:14.691 [2024-06-10 12:18:04.075462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.691 [2024-06-10 12:18:04.075474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.691 qpair failed and we were unable to recover it. 00:29:14.691 [2024-06-10 12:18:04.075581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.075594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.075675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.075688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.075773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.075786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 
00:29:14.692 [2024-06-10 12:18:04.075904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.075918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.075990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.076001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.076101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.076114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.076266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.076279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.076535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.076548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 
00:29:14.692 [2024-06-10 12:18:04.076699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.076712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.076812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.076825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.076970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.076984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.077077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.077090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.077243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.077256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 
00:29:14.692 [2024-06-10 12:18:04.077345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.077358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.077459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.077472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.077548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.077560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.077722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.077735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.077832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.077844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 
00:29:14.692 [2024-06-10 12:18:04.077945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.077958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.078122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.078136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.078311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.078325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.078408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.078422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.078509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.078522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 
00:29:14.692 [2024-06-10 12:18:04.078624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.078638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.078788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.078802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.078948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.078961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.079059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.079072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.079220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.079234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 
00:29:14.692 [2024-06-10 12:18:04.079320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.079337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.079522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.079535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.079631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.079644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.079727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.079740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.079901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.079915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 
00:29:14.692 [2024-06-10 12:18:04.080073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.080086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.080306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.080320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.080413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.080426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.080523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.080536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.080687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.080700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 
00:29:14.692 [2024-06-10 12:18:04.080792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.080805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.080897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.080910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.081011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.081024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.081103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.081116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.081267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.081280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 
00:29:14.692 [2024-06-10 12:18:04.081366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.081379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.081483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.692 [2024-06-10 12:18:04.081496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.692 qpair failed and we were unable to recover it. 00:29:14.692 [2024-06-10 12:18:04.081640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.693 [2024-06-10 12:18:04.081654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.693 qpair failed and we were unable to recover it. 00:29:14.693 [2024-06-10 12:18:04.081747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.693 [2024-06-10 12:18:04.081760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.693 qpair failed and we were unable to recover it. 00:29:14.693 [2024-06-10 12:18:04.081982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.693 [2024-06-10 12:18:04.081995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.693 qpair failed and we were unable to recover it. 
00:29:14.693 [2024-06-10 12:18:04.082094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.693 [2024-06-10 12:18:04.082107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.693 qpair failed and we were unable to recover it. 00:29:14.693 [2024-06-10 12:18:04.082252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.693 [2024-06-10 12:18:04.082265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.693 qpair failed and we were unable to recover it. 00:29:14.693 [2024-06-10 12:18:04.082352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.693 [2024-06-10 12:18:04.082366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.693 qpair failed and we were unable to recover it. 00:29:14.693 [2024-06-10 12:18:04.082445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.693 [2024-06-10 12:18:04.082459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.693 qpair failed and we were unable to recover it. 00:29:14.693 [2024-06-10 12:18:04.082550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.693 [2024-06-10 12:18:04.082563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.693 qpair failed and we were unable to recover it. 
00:29:14.693 [2024-06-10 12:18:04.082748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.693 [2024-06-10 12:18:04.082762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.693 qpair failed and we were unable to recover it. 00:29:14.693 [2024-06-10 12:18:04.082923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.693 [2024-06-10 12:18:04.082937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.693 qpair failed and we were unable to recover it. 00:29:14.693 [2024-06-10 12:18:04.083019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.693 [2024-06-10 12:18:04.083033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.693 qpair failed and we were unable to recover it. 00:29:14.693 [2024-06-10 12:18:04.083180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.693 [2024-06-10 12:18:04.083194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.693 qpair failed and we were unable to recover it. 00:29:14.693 [2024-06-10 12:18:04.083295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.693 [2024-06-10 12:18:04.083309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.693 qpair failed and we were unable to recover it. 
00:29:14.693 [2024-06-10 12:18:04.083378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.693 [2024-06-10 12:18:04.083390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.693 qpair failed and we were unable to recover it. 00:29:14.693 [2024-06-10 12:18:04.083491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.693 [2024-06-10 12:18:04.083504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.693 qpair failed and we were unable to recover it. 00:29:14.693 [2024-06-10 12:18:04.083596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.693 [2024-06-10 12:18:04.083609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.693 qpair failed and we were unable to recover it. 00:29:14.693 [2024-06-10 12:18:04.083715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.693 [2024-06-10 12:18:04.083728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.693 qpair failed and we were unable to recover it. 00:29:14.693 [2024-06-10 12:18:04.083810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.693 [2024-06-10 12:18:04.083823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.693 qpair failed and we were unable to recover it. 
00:29:14.693 [2024-06-10 12:18:04.083885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.693 [2024-06-10 12:18:04.083898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.693 qpair failed and we were unable to recover it. 00:29:14.693 [2024-06-10 12:18:04.083990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.693 [2024-06-10 12:18:04.084004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.693 qpair failed and we were unable to recover it. 00:29:14.693 [2024-06-10 12:18:04.084087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.693 [2024-06-10 12:18:04.084100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.693 qpair failed and we were unable to recover it. 00:29:14.693 [2024-06-10 12:18:04.084266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.693 [2024-06-10 12:18:04.084280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.693 qpair failed and we were unable to recover it. 00:29:14.693 [2024-06-10 12:18:04.084350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.693 [2024-06-10 12:18:04.084365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.693 qpair failed and we were unable to recover it. 
00:29:14.696 [2024-06-10 12:18:04.098610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.696 [2024-06-10 12:18:04.098622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.696 qpair failed and we were unable to recover it. 00:29:14.696 [2024-06-10 12:18:04.098723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.696 [2024-06-10 12:18:04.098736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.696 qpair failed and we were unable to recover it. 00:29:14.696 [2024-06-10 12:18:04.098895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.696 [2024-06-10 12:18:04.098908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.696 qpair failed and we were unable to recover it. 00:29:14.696 [2024-06-10 12:18:04.099000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.696 [2024-06-10 12:18:04.099014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.696 qpair failed and we were unable to recover it. 00:29:14.696 [2024-06-10 12:18:04.099168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.696 [2024-06-10 12:18:04.099181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.696 qpair failed and we were unable to recover it. 
00:29:14.696 [2024-06-10 12:18:04.099271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.696 [2024-06-10 12:18:04.099284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.696 qpair failed and we were unable to recover it. 00:29:14.696 [2024-06-10 12:18:04.099435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.696 [2024-06-10 12:18:04.099447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.696 qpair failed and we were unable to recover it. 00:29:14.696 [2024-06-10 12:18:04.099612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.696 [2024-06-10 12:18:04.099626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.696 qpair failed and we were unable to recover it. 00:29:14.696 [2024-06-10 12:18:04.099721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.696 [2024-06-10 12:18:04.099734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.696 qpair failed and we were unable to recover it. 00:29:14.696 [2024-06-10 12:18:04.099893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.696 [2024-06-10 12:18:04.099906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.696 qpair failed and we were unable to recover it. 
00:29:14.696 [2024-06-10 12:18:04.100019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.696 [2024-06-10 12:18:04.100033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.696 qpair failed and we were unable to recover it. 00:29:14.696 [2024-06-10 12:18:04.100188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.696 [2024-06-10 12:18:04.100201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.696 qpair failed and we were unable to recover it. 00:29:14.696 [2024-06-10 12:18:04.100281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.696 [2024-06-10 12:18:04.100294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.696 qpair failed and we were unable to recover it. 00:29:14.696 [2024-06-10 12:18:04.100388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.696 [2024-06-10 12:18:04.100401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.696 qpair failed and we were unable to recover it. 00:29:14.696 [2024-06-10 12:18:04.100486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.696 [2024-06-10 12:18:04.100499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.696 qpair failed and we were unable to recover it. 
00:29:14.696 [2024-06-10 12:18:04.100645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.696 [2024-06-10 12:18:04.100658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.696 qpair failed and we were unable to recover it. 00:29:14.696 [2024-06-10 12:18:04.100737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.696 [2024-06-10 12:18:04.100750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.696 qpair failed and we were unable to recover it. 00:29:14.697 [2024-06-10 12:18:04.100916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.697 [2024-06-10 12:18:04.100930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.697 qpair failed and we were unable to recover it. 00:29:14.697 [2024-06-10 12:18:04.101010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.697 [2024-06-10 12:18:04.101023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.697 qpair failed and we were unable to recover it. 00:29:14.697 [2024-06-10 12:18:04.101131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.697 [2024-06-10 12:18:04.101144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.697 qpair failed and we were unable to recover it. 
00:29:14.697 [2024-06-10 12:18:04.101291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.697 [2024-06-10 12:18:04.101304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.697 qpair failed and we were unable to recover it. 00:29:14.697 [2024-06-10 12:18:04.101404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.697 [2024-06-10 12:18:04.101417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.697 qpair failed and we were unable to recover it. 00:29:14.697 [2024-06-10 12:18:04.101518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.697 [2024-06-10 12:18:04.101532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.697 qpair failed and we were unable to recover it. 00:29:14.697 [2024-06-10 12:18:04.101635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.697 [2024-06-10 12:18:04.101649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.697 qpair failed and we were unable to recover it. 00:29:14.697 [2024-06-10 12:18:04.101732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.697 [2024-06-10 12:18:04.101745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.697 qpair failed and we were unable to recover it. 
00:29:14.697 [2024-06-10 12:18:04.101829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.697 [2024-06-10 12:18:04.101843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.697 qpair failed and we were unable to recover it. 00:29:14.697 [2024-06-10 12:18:04.101933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.697 [2024-06-10 12:18:04.101946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.697 qpair failed and we were unable to recover it. 00:29:14.697 [2024-06-10 12:18:04.102047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.697 [2024-06-10 12:18:04.102060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.697 qpair failed and we were unable to recover it. 00:29:14.697 [2024-06-10 12:18:04.102243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.697 [2024-06-10 12:18:04.102257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.697 qpair failed and we were unable to recover it. 00:29:14.697 [2024-06-10 12:18:04.102414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.697 [2024-06-10 12:18:04.102427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.697 qpair failed and we were unable to recover it. 
00:29:14.697 [2024-06-10 12:18:04.102513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.697 [2024-06-10 12:18:04.102527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.697 qpair failed and we were unable to recover it. 00:29:14.697 [2024-06-10 12:18:04.102610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.697 [2024-06-10 12:18:04.102623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.697 qpair failed and we were unable to recover it. 00:29:14.697 [2024-06-10 12:18:04.102701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.697 [2024-06-10 12:18:04.102713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.697 qpair failed and we were unable to recover it. 00:29:14.697 [2024-06-10 12:18:04.102864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.697 [2024-06-10 12:18:04.102877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.697 qpair failed and we were unable to recover it. 00:29:14.697 [2024-06-10 12:18:04.103092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.697 [2024-06-10 12:18:04.103106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.697 qpair failed and we were unable to recover it. 
00:29:14.697 [2024-06-10 12:18:04.103184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.697 [2024-06-10 12:18:04.103196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.697 qpair failed and we were unable to recover it. 00:29:14.697 [2024-06-10 12:18:04.103268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.697 [2024-06-10 12:18:04.103281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.697 qpair failed and we were unable to recover it. 00:29:14.697 [2024-06-10 12:18:04.103374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.697 [2024-06-10 12:18:04.103388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.697 qpair failed and we were unable to recover it. 00:29:14.697 [2024-06-10 12:18:04.103471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.697 [2024-06-10 12:18:04.103498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.697 qpair failed and we were unable to recover it. 00:29:14.697 [2024-06-10 12:18:04.103641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.697 [2024-06-10 12:18:04.103654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.697 qpair failed and we were unable to recover it. 
00:29:14.697 [2024-06-10 12:18:04.103886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.697 [2024-06-10 12:18:04.103899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.697 qpair failed and we were unable to recover it. 00:29:14.697 [2024-06-10 12:18:04.103982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.697 [2024-06-10 12:18:04.103995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.697 qpair failed and we were unable to recover it. 00:29:14.697 [2024-06-10 12:18:04.104148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.697 [2024-06-10 12:18:04.104161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.697 qpair failed and we were unable to recover it. 00:29:14.697 [2024-06-10 12:18:04.104318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.697 [2024-06-10 12:18:04.104332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.697 qpair failed and we were unable to recover it. 00:29:14.697 [2024-06-10 12:18:04.104413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.697 [2024-06-10 12:18:04.104426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.697 qpair failed and we were unable to recover it. 
00:29:14.697 [2024-06-10 12:18:04.104586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.697 [2024-06-10 12:18:04.104600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.697 qpair failed and we were unable to recover it. 00:29:14.697 [2024-06-10 12:18:04.104770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.697 [2024-06-10 12:18:04.104783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.697 qpair failed and we were unable to recover it. 00:29:14.697 [2024-06-10 12:18:04.104940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.697 [2024-06-10 12:18:04.104952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.697 qpair failed and we were unable to recover it. 00:29:14.697 [2024-06-10 12:18:04.105099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.697 [2024-06-10 12:18:04.105112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.697 qpair failed and we were unable to recover it. 00:29:14.697 [2024-06-10 12:18:04.105301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.697 [2024-06-10 12:18:04.105314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.697 qpair failed and we were unable to recover it. 
00:29:14.697 [2024-06-10 12:18:04.105529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.697 [2024-06-10 12:18:04.105543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.697 qpair failed and we were unable to recover it. 00:29:14.698 [2024-06-10 12:18:04.105764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.698 [2024-06-10 12:18:04.105777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.698 qpair failed and we were unable to recover it. 00:29:14.698 [2024-06-10 12:18:04.105891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.698 [2024-06-10 12:18:04.105904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.698 qpair failed and we were unable to recover it. 00:29:14.698 [2024-06-10 12:18:04.105973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.698 [2024-06-10 12:18:04.105985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.698 qpair failed and we were unable to recover it. 00:29:14.698 [2024-06-10 12:18:04.106235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.698 [2024-06-10 12:18:04.106249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.698 qpair failed and we were unable to recover it. 
00:29:14.698 [2024-06-10 12:18:04.106333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.698 [2024-06-10 12:18:04.106346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.698 qpair failed and we were unable to recover it. 00:29:14.698 [2024-06-10 12:18:04.106588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.698 [2024-06-10 12:18:04.106602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.698 qpair failed and we were unable to recover it. 00:29:14.698 [2024-06-10 12:18:04.106761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.698 [2024-06-10 12:18:04.106774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.698 qpair failed and we were unable to recover it. 00:29:14.698 [2024-06-10 12:18:04.106874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.698 [2024-06-10 12:18:04.106886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.698 qpair failed and we were unable to recover it. 00:29:14.698 [2024-06-10 12:18:04.106984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.698 [2024-06-10 12:18:04.106998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.698 qpair failed and we were unable to recover it. 
00:29:14.698 [2024-06-10 12:18:04.107171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.698 [2024-06-10 12:18:04.107184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.698 qpair failed and we were unable to recover it. 00:29:14.698 [2024-06-10 12:18:04.107342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.698 [2024-06-10 12:18:04.107355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.698 qpair failed and we were unable to recover it. 00:29:14.698 [2024-06-10 12:18:04.107524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.698 [2024-06-10 12:18:04.107537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.698 qpair failed and we were unable to recover it. 00:29:14.698 [2024-06-10 12:18:04.107641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.698 [2024-06-10 12:18:04.107654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.698 qpair failed and we were unable to recover it. 00:29:14.698 [2024-06-10 12:18:04.107737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.698 [2024-06-10 12:18:04.107750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.698 qpair failed and we were unable to recover it. 
00:29:14.698 [2024-06-10 12:18:04.107966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.698 [2024-06-10 12:18:04.107979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.698 qpair failed and we were unable to recover it. 00:29:14.698 [2024-06-10 12:18:04.108144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.698 [2024-06-10 12:18:04.108158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.698 qpair failed and we were unable to recover it. 00:29:14.698 [2024-06-10 12:18:04.108233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.698 [2024-06-10 12:18:04.108246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.698 qpair failed and we were unable to recover it. 00:29:14.698 [2024-06-10 12:18:04.108323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.698 [2024-06-10 12:18:04.108336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.698 qpair failed and we were unable to recover it. 00:29:14.698 [2024-06-10 12:18:04.108486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.698 [2024-06-10 12:18:04.108499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.698 qpair failed and we were unable to recover it. 
00:29:14.698 [2024-06-10 12:18:04.108696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.698 [2024-06-10 12:18:04.108710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.698 qpair failed and we were unable to recover it. 00:29:14.698 [2024-06-10 12:18:04.108792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.698 [2024-06-10 12:18:04.108805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.698 qpair failed and we were unable to recover it. 00:29:14.698 [2024-06-10 12:18:04.108916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.698 [2024-06-10 12:18:04.108928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.698 qpair failed and we were unable to recover it. 00:29:14.698 [2024-06-10 12:18:04.109076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.698 [2024-06-10 12:18:04.109089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.698 qpair failed and we were unable to recover it. 00:29:14.698 [2024-06-10 12:18:04.109327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.698 [2024-06-10 12:18:04.109340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.698 qpair failed and we were unable to recover it. 
00:29:14.698 [2024-06-10 12:18:04.109430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.698 [2024-06-10 12:18:04.109442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.698 qpair failed and we were unable to recover it.
[identical connect() failure (errno = 111, ECONNREFUSED) and qpair recovery error for tqpair=0x7f5044000b90, addr=10.0.0.2, port=4420 repeated for each retry through 2024-06-10 12:18:04.127049 / 00:29:14.703]
00:29:14.703 [2024-06-10 12:18:04.127167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.703 [2024-06-10 12:18:04.127181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.703 qpair failed and we were unable to recover it. 00:29:14.703 [2024-06-10 12:18:04.127335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.703 [2024-06-10 12:18:04.127348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.703 qpair failed and we were unable to recover it. 00:29:14.703 [2024-06-10 12:18:04.127588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.703 [2024-06-10 12:18:04.127602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.703 qpair failed and we were unable to recover it. 00:29:14.703 [2024-06-10 12:18:04.127749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.703 [2024-06-10 12:18:04.127762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.703 qpair failed and we were unable to recover it. 00:29:14.703 [2024-06-10 12:18:04.128000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.703 [2024-06-10 12:18:04.128013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.703 qpair failed and we were unable to recover it. 
00:29:14.703 [2024-06-10 12:18:04.128170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.703 [2024-06-10 12:18:04.128183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.703 qpair failed and we were unable to recover it. 00:29:14.703 [2024-06-10 12:18:04.128275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.703 [2024-06-10 12:18:04.128289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.703 qpair failed and we were unable to recover it. 00:29:14.703 [2024-06-10 12:18:04.128440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.703 [2024-06-10 12:18:04.128456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.703 qpair failed and we were unable to recover it. 00:29:14.703 [2024-06-10 12:18:04.128620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.703 [2024-06-10 12:18:04.128633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.703 qpair failed and we were unable to recover it. 00:29:14.703 [2024-06-10 12:18:04.128797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.703 [2024-06-10 12:18:04.128810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.703 qpair failed and we were unable to recover it. 
00:29:14.703 [2024-06-10 12:18:04.128889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.703 [2024-06-10 12:18:04.128901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.703 qpair failed and we were unable to recover it. 00:29:14.703 [2024-06-10 12:18:04.128995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.703 [2024-06-10 12:18:04.129007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.703 qpair failed and we were unable to recover it. 00:29:14.703 [2024-06-10 12:18:04.129101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.703 [2024-06-10 12:18:04.129114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.703 qpair failed and we were unable to recover it. 00:29:14.703 [2024-06-10 12:18:04.129261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.703 [2024-06-10 12:18:04.129275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.703 qpair failed and we were unable to recover it. 00:29:14.703 [2024-06-10 12:18:04.129335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.703 [2024-06-10 12:18:04.129347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.703 qpair failed and we were unable to recover it. 
00:29:14.703 [2024-06-10 12:18:04.129509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.703 [2024-06-10 12:18:04.129523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.703 qpair failed and we were unable to recover it. 00:29:14.703 [2024-06-10 12:18:04.129742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.704 [2024-06-10 12:18:04.129755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.704 qpair failed and we were unable to recover it. 00:29:14.704 [2024-06-10 12:18:04.129939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.704 [2024-06-10 12:18:04.129953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.704 qpair failed and we were unable to recover it. 00:29:14.704 [2024-06-10 12:18:04.130033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.704 [2024-06-10 12:18:04.130045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.704 qpair failed and we were unable to recover it. 00:29:14.704 [2024-06-10 12:18:04.130206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.704 [2024-06-10 12:18:04.130219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.704 qpair failed and we were unable to recover it. 
00:29:14.704 [2024-06-10 12:18:04.130378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.704 [2024-06-10 12:18:04.130392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.704 qpair failed and we were unable to recover it. 00:29:14.704 [2024-06-10 12:18:04.130489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.704 [2024-06-10 12:18:04.130502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.704 qpair failed and we were unable to recover it. 00:29:14.704 [2024-06-10 12:18:04.131138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.704 [2024-06-10 12:18:04.131163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.704 qpair failed and we were unable to recover it. 00:29:14.704 [2024-06-10 12:18:04.131361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.704 [2024-06-10 12:18:04.131375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.704 qpair failed and we were unable to recover it. 00:29:14.704 [2024-06-10 12:18:04.131639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.704 [2024-06-10 12:18:04.131653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.704 qpair failed and we were unable to recover it. 
00:29:14.704 [2024-06-10 12:18:04.131734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.704 [2024-06-10 12:18:04.131747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.704 qpair failed and we were unable to recover it. 00:29:14.704 [2024-06-10 12:18:04.131960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.704 [2024-06-10 12:18:04.131973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.704 qpair failed and we were unable to recover it. 00:29:14.704 [2024-06-10 12:18:04.132187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.704 [2024-06-10 12:18:04.132200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.704 qpair failed and we were unable to recover it. 00:29:14.704 [2024-06-10 12:18:04.132359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.704 [2024-06-10 12:18:04.132372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.704 qpair failed and we were unable to recover it. 00:29:14.704 [2024-06-10 12:18:04.132468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.704 [2024-06-10 12:18:04.132486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.704 qpair failed and we were unable to recover it. 
00:29:14.704 [2024-06-10 12:18:04.132648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.704 [2024-06-10 12:18:04.132661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.704 qpair failed and we were unable to recover it. 00:29:14.704 [2024-06-10 12:18:04.132890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.704 [2024-06-10 12:18:04.132903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.704 qpair failed and we were unable to recover it. 00:29:14.704 [2024-06-10 12:18:04.132987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.704 [2024-06-10 12:18:04.133000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.704 qpair failed and we were unable to recover it. 00:29:14.704 [2024-06-10 12:18:04.133103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.704 [2024-06-10 12:18:04.133116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.704 qpair failed and we were unable to recover it. 00:29:14.704 [2024-06-10 12:18:04.133208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.704 [2024-06-10 12:18:04.133219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.704 qpair failed and we were unable to recover it. 
00:29:14.704 [2024-06-10 12:18:04.133299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.704 [2024-06-10 12:18:04.133313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.704 qpair failed and we were unable to recover it. 00:29:14.704 [2024-06-10 12:18:04.133526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.704 [2024-06-10 12:18:04.133540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.704 qpair failed and we were unable to recover it. 00:29:14.704 [2024-06-10 12:18:04.133622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.704 [2024-06-10 12:18:04.133634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.704 qpair failed and we were unable to recover it. 00:29:14.704 [2024-06-10 12:18:04.133785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.704 [2024-06-10 12:18:04.133799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.704 qpair failed and we were unable to recover it. 00:29:14.704 [2024-06-10 12:18:04.134032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.704 [2024-06-10 12:18:04.134045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.704 qpair failed and we were unable to recover it. 
00:29:14.704 [2024-06-10 12:18:04.134207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.704 [2024-06-10 12:18:04.134220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.704 qpair failed and we were unable to recover it. 00:29:14.705 [2024-06-10 12:18:04.134367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.705 [2024-06-10 12:18:04.134381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.705 qpair failed and we were unable to recover it. 00:29:14.705 [2024-06-10 12:18:04.134619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.705 [2024-06-10 12:18:04.134633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.705 qpair failed and we were unable to recover it. 00:29:14.705 [2024-06-10 12:18:04.134771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.705 [2024-06-10 12:18:04.134784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.705 qpair failed and we were unable to recover it. 00:29:14.705 [2024-06-10 12:18:04.134929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.705 [2024-06-10 12:18:04.134943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.705 qpair failed and we were unable to recover it. 
00:29:14.705 [2024-06-10 12:18:04.135024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.705 [2024-06-10 12:18:04.135036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.705 qpair failed and we were unable to recover it. 00:29:14.705 [2024-06-10 12:18:04.135291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.705 [2024-06-10 12:18:04.135304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.705 qpair failed and we were unable to recover it. 00:29:14.705 [2024-06-10 12:18:04.135375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.705 [2024-06-10 12:18:04.135389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.705 qpair failed and we were unable to recover it. 00:29:14.705 [2024-06-10 12:18:04.135555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.705 [2024-06-10 12:18:04.135568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.705 qpair failed and we were unable to recover it. 00:29:14.705 [2024-06-10 12:18:04.135715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.705 [2024-06-10 12:18:04.135728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.705 qpair failed and we were unable to recover it. 
00:29:14.705 [2024-06-10 12:18:04.135941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.705 [2024-06-10 12:18:04.135955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.705 qpair failed and we were unable to recover it. 00:29:14.705 [2024-06-10 12:18:04.136193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.705 [2024-06-10 12:18:04.136211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.705 qpair failed and we were unable to recover it. 00:29:14.705 [2024-06-10 12:18:04.136369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.705 [2024-06-10 12:18:04.136383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.705 qpair failed and we were unable to recover it. 00:29:14.705 [2024-06-10 12:18:04.136538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.705 [2024-06-10 12:18:04.136552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.705 qpair failed and we were unable to recover it. 00:29:14.705 [2024-06-10 12:18:04.136741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.705 [2024-06-10 12:18:04.136754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.705 qpair failed and we were unable to recover it. 
00:29:14.705 [2024-06-10 12:18:04.136913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.705 [2024-06-10 12:18:04.136927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.705 qpair failed and we were unable to recover it. 00:29:14.705 [2024-06-10 12:18:04.137007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.705 [2024-06-10 12:18:04.137019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.705 qpair failed and we were unable to recover it. 00:29:14.705 [2024-06-10 12:18:04.137118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.705 [2024-06-10 12:18:04.137130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.705 qpair failed and we were unable to recover it. 00:29:14.705 [2024-06-10 12:18:04.137240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.705 [2024-06-10 12:18:04.137253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.705 qpair failed and we were unable to recover it. 00:29:14.705 [2024-06-10 12:18:04.137441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.705 [2024-06-10 12:18:04.137455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.705 qpair failed and we were unable to recover it. 
00:29:14.705 [2024-06-10 12:18:04.137615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.705 [2024-06-10 12:18:04.137628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.705 qpair failed and we were unable to recover it. 00:29:14.705 [2024-06-10 12:18:04.137782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.705 [2024-06-10 12:18:04.137795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.705 qpair failed and we were unable to recover it. 00:29:14.705 [2024-06-10 12:18:04.137875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.705 [2024-06-10 12:18:04.137887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.705 qpair failed and we were unable to recover it. 00:29:14.705 [2024-06-10 12:18:04.138037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.705 [2024-06-10 12:18:04.138051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.705 qpair failed and we were unable to recover it. 00:29:14.705 [2024-06-10 12:18:04.138160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.705 [2024-06-10 12:18:04.138173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.705 qpair failed and we were unable to recover it. 
00:29:14.705 [2024-06-10 12:18:04.138338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.705 [2024-06-10 12:18:04.138351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.705 qpair failed and we were unable to recover it. 00:29:14.706 [2024-06-10 12:18:04.138440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.706 [2024-06-10 12:18:04.138454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.706 qpair failed and we were unable to recover it. 00:29:14.706 [2024-06-10 12:18:04.138564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.706 [2024-06-10 12:18:04.138576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.706 qpair failed and we were unable to recover it. 00:29:14.706 [2024-06-10 12:18:04.138813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.706 [2024-06-10 12:18:04.138827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.706 qpair failed and we were unable to recover it. 00:29:14.706 [2024-06-10 12:18:04.138935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.706 [2024-06-10 12:18:04.138949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.706 qpair failed and we were unable to recover it. 
00:29:14.706 [2024-06-10 12:18:04.139116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.706 [2024-06-10 12:18:04.139129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.706 qpair failed and we were unable to recover it. 00:29:14.706 [2024-06-10 12:18:04.139286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.706 [2024-06-10 12:18:04.139300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.706 qpair failed and we were unable to recover it. 00:29:14.706 [2024-06-10 12:18:04.139400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.706 [2024-06-10 12:18:04.139413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.706 qpair failed and we were unable to recover it. 00:29:14.706 [2024-06-10 12:18:04.139503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.706 [2024-06-10 12:18:04.139516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.706 qpair failed and we were unable to recover it. 00:29:14.706 [2024-06-10 12:18:04.139602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.706 [2024-06-10 12:18:04.139613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.706 qpair failed and we were unable to recover it. 
00:29:14.706 [2024-06-10 12:18:04.139800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.706 [2024-06-10 12:18:04.139813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.706 qpair failed and we were unable to recover it. 00:29:14.706 [2024-06-10 12:18:04.139994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.706 [2024-06-10 12:18:04.140007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.706 qpair failed and we were unable to recover it. 00:29:14.706 [2024-06-10 12:18:04.140087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.706 [2024-06-10 12:18:04.140099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.706 qpair failed and we were unable to recover it. 00:29:14.706 [2024-06-10 12:18:04.140261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.706 [2024-06-10 12:18:04.140274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.706 qpair failed and we were unable to recover it. 00:29:14.706 [2024-06-10 12:18:04.140491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.706 [2024-06-10 12:18:04.140505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.706 qpair failed and we were unable to recover it. 
00:29:14.706 [2024-06-10 12:18:04.140667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.706 [2024-06-10 12:18:04.140681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.706 qpair failed and we were unable to recover it. 00:29:14.706 [2024-06-10 12:18:04.140874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.706 [2024-06-10 12:18:04.140887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.706 qpair failed and we were unable to recover it. 00:29:14.706 [2024-06-10 12:18:04.141068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.706 [2024-06-10 12:18:04.141080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.706 qpair failed and we were unable to recover it. 00:29:14.706 [2024-06-10 12:18:04.141228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.706 [2024-06-10 12:18:04.141241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.706 qpair failed and we were unable to recover it. 00:29:14.706 [2024-06-10 12:18:04.141426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.706 [2024-06-10 12:18:04.141440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.706 qpair failed and we were unable to recover it. 
00:29:14.707 [2024-06-10 12:18:04.141509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.707 [2024-06-10 12:18:04.141521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.707 qpair failed and we were unable to recover it.
00:29:14.707 [2024-06-10 12:18:04.141686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.707 [2024-06-10 12:18:04.141699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.707 qpair failed and we were unable to recover it.
00:29:14.707 [2024-06-10 12:18:04.141884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.707 [2024-06-10 12:18:04.141899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.707 qpair failed and we were unable to recover it.
00:29:14.707 [2024-06-10 12:18:04.142060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.707 [2024-06-10 12:18:04.142073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.707 qpair failed and we were unable to recover it.
00:29:14.707 [2024-06-10 12:18:04.142323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.707 [2024-06-10 12:18:04.142337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.707 qpair failed and we were unable to recover it.
00:29:14.707 [2024-06-10 12:18:04.142524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.707 [2024-06-10 12:18:04.142538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.707 qpair failed and we were unable to recover it.
00:29:14.707 [2024-06-10 12:18:04.142705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.707 [2024-06-10 12:18:04.142718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.707 qpair failed and we were unable to recover it.
00:29:14.707 [2024-06-10 12:18:04.142865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.707 [2024-06-10 12:18:04.142877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.707 qpair failed and we were unable to recover it.
00:29:14.707 [2024-06-10 12:18:04.142957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.707 [2024-06-10 12:18:04.142969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.707 qpair failed and we were unable to recover it.
00:29:14.707 [2024-06-10 12:18:04.143243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.707 [2024-06-10 12:18:04.143257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.707 qpair failed and we were unable to recover it.
00:29:14.707 [2024-06-10 12:18:04.143351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.707 [2024-06-10 12:18:04.143363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.707 qpair failed and we were unable to recover it.
00:29:14.707 [2024-06-10 12:18:04.143547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.707 [2024-06-10 12:18:04.143561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.707 qpair failed and we were unable to recover it.
00:29:14.707 [2024-06-10 12:18:04.143662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.707 [2024-06-10 12:18:04.143676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.707 qpair failed and we were unable to recover it.
00:29:14.707 [2024-06-10 12:18:04.143767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.707 [2024-06-10 12:18:04.143780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.707 qpair failed and we were unable to recover it.
00:29:14.707 [2024-06-10 12:18:04.143966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.707 [2024-06-10 12:18:04.143979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.707 qpair failed and we were unable to recover it.
00:29:14.707 [2024-06-10 12:18:04.144141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.707 [2024-06-10 12:18:04.144153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.707 qpair failed and we were unable to recover it.
00:29:14.707 [2024-06-10 12:18:04.144289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.707 [2024-06-10 12:18:04.144302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.707 qpair failed and we were unable to recover it.
00:29:14.707 [2024-06-10 12:18:04.144450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.707 [2024-06-10 12:18:04.144463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.707 qpair failed and we were unable to recover it.
00:29:14.707 [2024-06-10 12:18:04.144687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.707 [2024-06-10 12:18:04.144722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:14.707 qpair failed and we were unable to recover it.
00:29:14.707 [2024-06-10 12:18:04.144837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.707 [2024-06-10 12:18:04.144857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:14.707 qpair failed and we were unable to recover it.
00:29:14.707 [2024-06-10 12:18:04.145035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.707 [2024-06-10 12:18:04.145054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:14.707 qpair failed and we were unable to recover it.
00:29:14.707 [2024-06-10 12:18:04.145160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.707 [2024-06-10 12:18:04.145177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:14.707 qpair failed and we were unable to recover it.
00:29:14.707 [2024-06-10 12:18:04.145353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.707 [2024-06-10 12:18:04.145376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:14.707 qpair failed and we were unable to recover it.
00:29:14.707 [2024-06-10 12:18:04.145491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.707 [2024-06-10 12:18:04.145509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:14.707 qpair failed and we were unable to recover it.
00:29:14.707 [2024-06-10 12:18:04.145610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.707 [2024-06-10 12:18:04.145625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.707 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.145726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.145739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.145823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.145836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.145984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.145997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.146153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.146166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.146295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.146316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.146488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.146507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.146682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.146699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.146870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.146884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.147046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.147060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.147146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.147159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.147250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.147263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.147411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.147425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.147538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.147552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.147647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.147660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.147876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.147890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.148050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.148063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.148231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.148242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.148335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.148349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.148567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.148580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.148682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.148694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.148841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.148853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.148950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.148962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.149057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.149069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.149217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.149230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.149412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.149426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.149642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.149655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.149739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.149751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.149846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.149859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.150094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.150107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.150340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.150353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.150518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.150531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.150700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.150714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.150868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.150881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.150959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.150971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.151137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.151151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.151251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.993 [2024-06-10 12:18:04.151264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.993 qpair failed and we were unable to recover it.
00:29:14.993 [2024-06-10 12:18:04.151394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.994 [2024-06-10 12:18:04.151407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.994 qpair failed and we were unable to recover it.
00:29:14.994 [2024-06-10 12:18:04.151566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.994 [2024-06-10 12:18:04.151580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.994 qpair failed and we were unable to recover it.
00:29:14.994 [2024-06-10 12:18:04.151675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.994 [2024-06-10 12:18:04.151687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.994 qpair failed and we were unable to recover it.
00:29:14.994 [2024-06-10 12:18:04.151816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.994 [2024-06-10 12:18:04.151830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.994 qpair failed and we were unable to recover it.
00:29:14.994 [2024-06-10 12:18:04.152011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.994 [2024-06-10 12:18:04.152024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.994 qpair failed and we were unable to recover it.
00:29:14.994 [2024-06-10 12:18:04.152192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.994 [2024-06-10 12:18:04.152205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.994 qpair failed and we were unable to recover it.
00:29:14.994 [2024-06-10 12:18:04.152293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.994 [2024-06-10 12:18:04.152306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.994 qpair failed and we were unable to recover it.
00:29:14.994 [2024-06-10 12:18:04.152384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.994 [2024-06-10 12:18:04.152396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.994 qpair failed and we were unable to recover it.
00:29:14.994 [2024-06-10 12:18:04.152591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.994 [2024-06-10 12:18:04.152610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.994 qpair failed and we were unable to recover it.
00:29:14.994 [2024-06-10 12:18:04.152719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.994 [2024-06-10 12:18:04.152737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.994 qpair failed and we were unable to recover it.
00:29:14.994 [2024-06-10 12:18:04.152906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.994 [2024-06-10 12:18:04.152924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.994 qpair failed and we were unable to recover it.
00:29:14.994 [2024-06-10 12:18:04.153090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.994 [2024-06-10 12:18:04.153107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.994 qpair failed and we were unable to recover it.
00:29:14.994 [2024-06-10 12:18:04.153224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.994 [2024-06-10 12:18:04.153241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.994 qpair failed and we were unable to recover it.
00:29:14.994 [2024-06-10 12:18:04.153407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.994 [2024-06-10 12:18:04.153424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.994 qpair failed and we were unable to recover it.
00:29:14.994 [2024-06-10 12:18:04.153529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.994 [2024-06-10 12:18:04.153547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.994 qpair failed and we were unable to recover it.
00:29:14.994 [2024-06-10 12:18:04.153704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.994 [2024-06-10 12:18:04.153722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.994 qpair failed and we were unable to recover it.
00:29:14.994 [2024-06-10 12:18:04.153821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.994 [2024-06-10 12:18:04.153839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.994 qpair failed and we were unable to recover it.
00:29:14.994 [2024-06-10 12:18:04.154081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.994 [2024-06-10 12:18:04.154098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.994 qpair failed and we were unable to recover it.
00:29:14.994 [2024-06-10 12:18:04.154325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.994 [2024-06-10 12:18:04.154342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.994 qpair failed and we were unable to recover it.
00:29:14.994 [2024-06-10 12:18:04.154454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.994 [2024-06-10 12:18:04.154472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.994 qpair failed and we were unable to recover it.
00:29:14.994 [2024-06-10 12:18:04.154656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.994 [2024-06-10 12:18:04.154675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.994 qpair failed and we were unable to recover it.
00:29:14.994 [2024-06-10 12:18:04.154861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.994 [2024-06-10 12:18:04.154883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.994 qpair failed and we were unable to recover it.
00:29:14.994 [2024-06-10 12:18:04.155050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.994 [2024-06-10 12:18:04.155067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:14.994 qpair failed and we were unable to recover it.
00:29:14.994 [2024-06-10 12:18:04.155225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.994 [2024-06-10 12:18:04.155239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.994 qpair failed and we were unable to recover it.
00:29:14.994 [2024-06-10 12:18:04.155461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.994 [2024-06-10 12:18:04.155474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.994 qpair failed and we were unable to recover it.
00:29:14.994 [2024-06-10 12:18:04.155566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.994 [2024-06-10 12:18:04.155579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.994 qpair failed and we were unable to recover it.
00:29:14.994 [2024-06-10 12:18:04.155729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.994 [2024-06-10 12:18:04.155742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.994 qpair failed and we were unable to recover it.
00:29:14.994 [2024-06-10 12:18:04.155863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.994 [2024-06-10 12:18:04.155876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.994 qpair failed and we were unable to recover it.
00:29:14.994 [2024-06-10 12:18:04.155981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.994 [2024-06-10 12:18:04.155994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.994 qpair failed and we were unable to recover it.
00:29:14.994 [2024-06-10 12:18:04.156140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.994 [2024-06-10 12:18:04.156154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.994 qpair failed and we were unable to recover it.
00:29:14.994 [2024-06-10 12:18:04.156257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.994 [2024-06-10 12:18:04.156270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.994 qpair failed and we were unable to recover it.
00:29:14.994 [2024-06-10 12:18:04.156420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.994 [2024-06-10 12:18:04.156433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.994 qpair failed and we were unable to recover it.
00:29:14.994 [2024-06-10 12:18:04.156530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.994 [2024-06-10 12:18:04.156543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.994 qpair failed and we were unable to recover it.
00:29:14.994 [2024-06-10 12:18:04.156694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.994 [2024-06-10 12:18:04.156708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:14.994 qpair failed and we were unable to recover it.
00:29:14.994 [2024-06-10 12:18:04.156890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-06-10 12:18:04.156903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 00:29:14.994 [2024-06-10 12:18:04.157011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-06-10 12:18:04.157024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 00:29:14.994 [2024-06-10 12:18:04.157177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-06-10 12:18:04.157191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 00:29:14.994 [2024-06-10 12:18:04.157354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.994 [2024-06-10 12:18:04.157367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.994 qpair failed and we were unable to recover it. 00:29:14.994 [2024-06-10 12:18:04.157507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.157520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 
00:29:14.995 [2024-06-10 12:18:04.157730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.157743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-06-10 12:18:04.157835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.157849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-06-10 12:18:04.157968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.157980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-06-10 12:18:04.158073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.158086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-06-10 12:18:04.158246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.158259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 
00:29:14.995 [2024-06-10 12:18:04.158356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.158369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-06-10 12:18:04.158531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.158544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-06-10 12:18:04.158693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.158706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-06-10 12:18:04.158797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.158810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-06-10 12:18:04.158963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.158976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 
00:29:14.995 [2024-06-10 12:18:04.159084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.159096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-06-10 12:18:04.159249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.159262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-06-10 12:18:04.159356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.159368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-06-10 12:18:04.159557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.159571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-06-10 12:18:04.159715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.159728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 
00:29:14.995 [2024-06-10 12:18:04.159879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.159892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-06-10 12:18:04.160045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.160057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-06-10 12:18:04.160202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.160214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-06-10 12:18:04.160372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.160385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-06-10 12:18:04.160496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.160510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 
00:29:14.995 [2024-06-10 12:18:04.160677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.160690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-06-10 12:18:04.160905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.160918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-06-10 12:18:04.161012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.161026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-06-10 12:18:04.161129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.161142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-06-10 12:18:04.161360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.161373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 
00:29:14.995 [2024-06-10 12:18:04.161488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.161501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-06-10 12:18:04.161650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.161664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-06-10 12:18:04.161757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.161768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-06-10 12:18:04.161877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.161891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-06-10 12:18:04.162043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.162056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 
00:29:14.995 [2024-06-10 12:18:04.162212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.162226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-06-10 12:18:04.162313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.162326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-06-10 12:18:04.162485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.162498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-06-10 12:18:04.162654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.162667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-06-10 12:18:04.162758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.162771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 
00:29:14.995 [2024-06-10 12:18:04.162869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.162882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-06-10 12:18:04.162952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.162964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.995 qpair failed and we were unable to recover it. 00:29:14.995 [2024-06-10 12:18:04.163130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.995 [2024-06-10 12:18:04.163143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-06-10 12:18:04.163237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.163249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-06-10 12:18:04.163326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.163337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 
00:29:14.996 [2024-06-10 12:18:04.163431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.163444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-06-10 12:18:04.163543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.163556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-06-10 12:18:04.163638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.163651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-06-10 12:18:04.163760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.163772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-06-10 12:18:04.163869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.163882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 
00:29:14.996 [2024-06-10 12:18:04.163971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.163984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-06-10 12:18:04.164153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.164166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-06-10 12:18:04.164248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.164262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-06-10 12:18:04.164324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.164336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-06-10 12:18:04.164447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.164466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 
00:29:14.996 [2024-06-10 12:18:04.164570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.164587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-06-10 12:18:04.164770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.164787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-06-10 12:18:04.164963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.164980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-06-10 12:18:04.165154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.165171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-06-10 12:18:04.165263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.165280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 
00:29:14.996 [2024-06-10 12:18:04.165378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.165393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-06-10 12:18:04.165495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.165508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-06-10 12:18:04.165675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.165688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-06-10 12:18:04.165794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.165807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-06-10 12:18:04.165890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.165902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 
00:29:14.996 [2024-06-10 12:18:04.165967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.165979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-06-10 12:18:04.166072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.166085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-06-10 12:18:04.166189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.166203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-06-10 12:18:04.166384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.166398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-06-10 12:18:04.166566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.166579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 
00:29:14.996 [2024-06-10 12:18:04.166740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.166753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-06-10 12:18:04.166839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.166852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-06-10 12:18:04.167003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.167015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-06-10 12:18:04.167111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.167124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-06-10 12:18:04.167221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.167233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 
00:29:14.996 [2024-06-10 12:18:04.167330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.167343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-06-10 12:18:04.167437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.167451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-06-10 12:18:04.167615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.167628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-06-10 12:18:04.167714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.167726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.996 [2024-06-10 12:18:04.167807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.167820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 
00:29:14.996 [2024-06-10 12:18:04.167914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.996 [2024-06-10 12:18:04.167928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.996 qpair failed and we were unable to recover it. 00:29:14.997 [2024-06-10 12:18:04.168011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.997 [2024-06-10 12:18:04.168024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.997 qpair failed and we were unable to recover it. 00:29:14.997 [2024-06-10 12:18:04.168187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.997 [2024-06-10 12:18:04.168200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.997 qpair failed and we were unable to recover it. 00:29:14.997 [2024-06-10 12:18:04.168307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.997 [2024-06-10 12:18:04.168320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.997 qpair failed and we were unable to recover it. 00:29:14.997 [2024-06-10 12:18:04.168421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.997 [2024-06-10 12:18:04.168434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.997 qpair failed and we were unable to recover it. 
00:29:14.997 [2024-06-10 12:18:04.168537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.997 [2024-06-10 12:18:04.168551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.997 qpair failed and we were unable to recover it. 00:29:14.997 [2024-06-10 12:18:04.168658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.997 [2024-06-10 12:18:04.168671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.997 qpair failed and we were unable to recover it. 00:29:14.997 [2024-06-10 12:18:04.168811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.997 [2024-06-10 12:18:04.168824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.997 qpair failed and we were unable to recover it. 00:29:14.997 [2024-06-10 12:18:04.168927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.997 [2024-06-10 12:18:04.168940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.997 qpair failed and we were unable to recover it. 00:29:14.997 [2024-06-10 12:18:04.169053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.997 [2024-06-10 12:18:04.169066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:14.997 qpair failed and we were unable to recover it. 
00:29:15.000 [2024-06-10 12:18:04.185066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.000 [2024-06-10 12:18:04.185079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.000 qpair failed and we were unable to recover it. 00:29:15.000 [2024-06-10 12:18:04.185171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.000 [2024-06-10 12:18:04.185184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.000 qpair failed and we were unable to recover it. 00:29:15.000 [2024-06-10 12:18:04.185264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.000 [2024-06-10 12:18:04.185277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.000 qpair failed and we were unable to recover it. 00:29:15.000 [2024-06-10 12:18:04.185349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.000 [2024-06-10 12:18:04.185362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.000 qpair failed and we were unable to recover it. 00:29:15.001 [2024-06-10 12:18:04.185503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.185516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 
00:29:15.001 [2024-06-10 12:18:04.185597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.185610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 00:29:15.001 [2024-06-10 12:18:04.186512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.186539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 00:29:15.001 [2024-06-10 12:18:04.186781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.186795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 00:29:15.001 [2024-06-10 12:18:04.186956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.186970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 00:29:15.001 [2024-06-10 12:18:04.187076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.187090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 
00:29:15.001 [2024-06-10 12:18:04.187193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.187206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 00:29:15.001 [2024-06-10 12:18:04.187345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.187358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 00:29:15.001 [2024-06-10 12:18:04.187521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.187535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 00:29:15.001 [2024-06-10 12:18:04.187700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.187714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 00:29:15.001 [2024-06-10 12:18:04.187796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.187811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 
00:29:15.001 [2024-06-10 12:18:04.187920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.187933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 00:29:15.001 [2024-06-10 12:18:04.188133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.188146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 00:29:15.001 [2024-06-10 12:18:04.188229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.188243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 00:29:15.001 [2024-06-10 12:18:04.188343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.188356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 00:29:15.001 [2024-06-10 12:18:04.188507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.188521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 
00:29:15.001 [2024-06-10 12:18:04.188671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.188684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 00:29:15.001 [2024-06-10 12:18:04.188749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.188762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 00:29:15.001 [2024-06-10 12:18:04.188848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.188860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 00:29:15.001 [2024-06-10 12:18:04.188965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.188979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 00:29:15.001 [2024-06-10 12:18:04.189072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.189085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 
00:29:15.001 [2024-06-10 12:18:04.189176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.189189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 00:29:15.001 [2024-06-10 12:18:04.189271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.189285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 00:29:15.001 [2024-06-10 12:18:04.189433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.189445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 00:29:15.001 [2024-06-10 12:18:04.189604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.189618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 00:29:15.001 [2024-06-10 12:18:04.189764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.189777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 
00:29:15.001 [2024-06-10 12:18:04.189938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.189951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 00:29:15.001 [2024-06-10 12:18:04.190164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.190178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 00:29:15.001 [2024-06-10 12:18:04.190262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.190275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 00:29:15.001 [2024-06-10 12:18:04.190388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.190401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 00:29:15.001 [2024-06-10 12:18:04.190568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.190581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 
00:29:15.001 [2024-06-10 12:18:04.190673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.190686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 00:29:15.001 [2024-06-10 12:18:04.190769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.190783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 00:29:15.001 [2024-06-10 12:18:04.190870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.190882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 00:29:15.001 [2024-06-10 12:18:04.191046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.191059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 00:29:15.001 [2024-06-10 12:18:04.191136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.191147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 
00:29:15.001 [2024-06-10 12:18:04.191242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.191255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 00:29:15.001 [2024-06-10 12:18:04.191471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.001 [2024-06-10 12:18:04.191491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.001 qpair failed and we were unable to recover it. 00:29:15.001 [2024-06-10 12:18:04.191639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.002 [2024-06-10 12:18:04.191652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.002 qpair failed and we were unable to recover it. 00:29:15.002 [2024-06-10 12:18:04.191747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.002 [2024-06-10 12:18:04.191759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.002 qpair failed and we were unable to recover it. 00:29:15.002 [2024-06-10 12:18:04.191855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.002 [2024-06-10 12:18:04.191868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.002 qpair failed and we were unable to recover it. 
00:29:15.002 [2024-06-10 12:18:04.192015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.002 [2024-06-10 12:18:04.192027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.002 qpair failed and we were unable to recover it. 00:29:15.002 [2024-06-10 12:18:04.192208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.002 [2024-06-10 12:18:04.192222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.002 qpair failed and we were unable to recover it. 00:29:15.002 [2024-06-10 12:18:04.192385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.002 [2024-06-10 12:18:04.192398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.002 qpair failed and we were unable to recover it. 00:29:15.002 [2024-06-10 12:18:04.192582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.002 [2024-06-10 12:18:04.192595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.002 qpair failed and we were unable to recover it. 00:29:15.002 [2024-06-10 12:18:04.192683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.002 [2024-06-10 12:18:04.192694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.002 qpair failed and we were unable to recover it. 
00:29:15.002 [2024-06-10 12:18:04.192774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.002 [2024-06-10 12:18:04.192787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.002 qpair failed and we were unable to recover it. 00:29:15.002 [2024-06-10 12:18:04.192887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.002 [2024-06-10 12:18:04.192900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.002 qpair failed and we were unable to recover it. 00:29:15.002 [2024-06-10 12:18:04.192987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.002 [2024-06-10 12:18:04.193000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.002 qpair failed and we were unable to recover it. 00:29:15.002 [2024-06-10 12:18:04.193102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.002 [2024-06-10 12:18:04.193115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.002 qpair failed and we were unable to recover it. 00:29:15.002 [2024-06-10 12:18:04.193197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.002 [2024-06-10 12:18:04.193212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.002 qpair failed and we were unable to recover it. 
00:29:15.002 [2024-06-10 12:18:04.193358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.002 [2024-06-10 12:18:04.193372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.002 qpair failed and we were unable to recover it. 00:29:15.002 [2024-06-10 12:18:04.193583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.002 [2024-06-10 12:18:04.193596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.002 qpair failed and we were unable to recover it. 00:29:15.002 [2024-06-10 12:18:04.193812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.002 [2024-06-10 12:18:04.193825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.002 qpair failed and we were unable to recover it. 00:29:15.002 [2024-06-10 12:18:04.193924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.002 [2024-06-10 12:18:04.193937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.002 qpair failed and we were unable to recover it. 00:29:15.002 [2024-06-10 12:18:04.194095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.002 [2024-06-10 12:18:04.194108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.002 qpair failed and we were unable to recover it. 
00:29:15.002 [2024-06-10 12:18:04.194202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.002 [2024-06-10 12:18:04.194215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.002 qpair failed and we were unable to recover it. 00:29:15.002 [2024-06-10 12:18:04.194334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.002 [2024-06-10 12:18:04.194347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.002 qpair failed and we were unable to recover it. 00:29:15.002 [2024-06-10 12:18:04.194503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.002 [2024-06-10 12:18:04.194517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.002 qpair failed and we were unable to recover it. 00:29:15.002 [2024-06-10 12:18:04.194676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.002 [2024-06-10 12:18:04.194689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.002 qpair failed and we were unable to recover it. 00:29:15.002 [2024-06-10 12:18:04.194781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.002 [2024-06-10 12:18:04.194794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.002 qpair failed and we were unable to recover it. 
00:29:15.002 [2024-06-10 12:18:04.194875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.002 [2024-06-10 12:18:04.194888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.002 qpair failed and we were unable to recover it. 00:29:15.002 [2024-06-10 12:18:04.194979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.002 [2024-06-10 12:18:04.194992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.002 qpair failed and we were unable to recover it. 00:29:15.002 [2024-06-10 12:18:04.195228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.002 [2024-06-10 12:18:04.195242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.002 qpair failed and we were unable to recover it. 00:29:15.002 [2024-06-10 12:18:04.195324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.002 [2024-06-10 12:18:04.195336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.002 qpair failed and we were unable to recover it. 00:29:15.002 [2024-06-10 12:18:04.195419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.002 [2024-06-10 12:18:04.195432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.002 qpair failed and we were unable to recover it. 
00:29:15.002 [2024-06-10 12:18:04.195557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.002 [2024-06-10 12:18:04.195571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.002 qpair failed and we were unable to recover it. 00:29:15.002 [2024-06-10 12:18:04.195758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.002 [2024-06-10 12:18:04.195772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.002 qpair failed and we were unable to recover it. 00:29:15.002 [2024-06-10 12:18:04.195875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.002 [2024-06-10 12:18:04.195888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.002 qpair failed and we were unable to recover it. 00:29:15.002 [2024-06-10 12:18:04.196042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.196056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 00:29:15.003 [2024-06-10 12:18:04.196206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.196219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 
00:29:15.003 [2024-06-10 12:18:04.196364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.196377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 00:29:15.003 [2024-06-10 12:18:04.196545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.196559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 00:29:15.003 [2024-06-10 12:18:04.196654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.196666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 00:29:15.003 [2024-06-10 12:18:04.196777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.196790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 00:29:15.003 [2024-06-10 12:18:04.196875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.196887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 
00:29:15.003 [2024-06-10 12:18:04.196985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.196999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 00:29:15.003 [2024-06-10 12:18:04.197145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.197157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 00:29:15.003 [2024-06-10 12:18:04.197255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.197269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 00:29:15.003 [2024-06-10 12:18:04.197360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.197374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 00:29:15.003 [2024-06-10 12:18:04.197459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.197472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 
00:29:15.003 [2024-06-10 12:18:04.197640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.197653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 00:29:15.003 [2024-06-10 12:18:04.197741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.197754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 00:29:15.003 [2024-06-10 12:18:04.197845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.197858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 00:29:15.003 [2024-06-10 12:18:04.197949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.197962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 00:29:15.003 [2024-06-10 12:18:04.198049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.198062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 
00:29:15.003 [2024-06-10 12:18:04.198215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.198229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 00:29:15.003 [2024-06-10 12:18:04.198321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.198334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 00:29:15.003 [2024-06-10 12:18:04.198414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.198426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 00:29:15.003 [2024-06-10 12:18:04.198575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.198588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 00:29:15.003 [2024-06-10 12:18:04.198706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.198721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 
00:29:15.003 [2024-06-10 12:18:04.198808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.198821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 00:29:15.003 [2024-06-10 12:18:04.198938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.198951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 00:29:15.003 [2024-06-10 12:18:04.199106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.199118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 00:29:15.003 [2024-06-10 12:18:04.199199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.199211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 00:29:15.003 [2024-06-10 12:18:04.199296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.199308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 
00:29:15.003 [2024-06-10 12:18:04.199414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.199427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 00:29:15.003 [2024-06-10 12:18:04.199543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.199557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 00:29:15.003 [2024-06-10 12:18:04.199704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.199717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 00:29:15.003 [2024-06-10 12:18:04.199882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.199896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 00:29:15.003 [2024-06-10 12:18:04.199995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.200008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 
00:29:15.003 [2024-06-10 12:18:04.200111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.200123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 00:29:15.003 [2024-06-10 12:18:04.200343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.200356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 00:29:15.003 [2024-06-10 12:18:04.200438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.200450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 00:29:15.003 [2024-06-10 12:18:04.200566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.200580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 00:29:15.003 [2024-06-10 12:18:04.200672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.200685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 
00:29:15.003 [2024-06-10 12:18:04.200836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.200849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.003 qpair failed and we were unable to recover it. 00:29:15.003 [2024-06-10 12:18:04.200949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.003 [2024-06-10 12:18:04.200961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 00:29:15.004 [2024-06-10 12:18:04.201073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.201086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 00:29:15.004 [2024-06-10 12:18:04.201166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.201179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 00:29:15.004 [2024-06-10 12:18:04.201336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.201349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 
00:29:15.004 [2024-06-10 12:18:04.201441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.201454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 00:29:15.004 [2024-06-10 12:18:04.201548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.201560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 00:29:15.004 [2024-06-10 12:18:04.201673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.201686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 00:29:15.004 [2024-06-10 12:18:04.201781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.201794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 00:29:15.004 [2024-06-10 12:18:04.201871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.201883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 
00:29:15.004 [2024-06-10 12:18:04.202029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.202043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 00:29:15.004 [2024-06-10 12:18:04.202138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.202152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 00:29:15.004 [2024-06-10 12:18:04.202309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.202322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 00:29:15.004 [2024-06-10 12:18:04.202541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.202554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 00:29:15.004 [2024-06-10 12:18:04.202648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.202660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 
00:29:15.004 [2024-06-10 12:18:04.202748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.202761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 00:29:15.004 [2024-06-10 12:18:04.202848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.202860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 00:29:15.004 [2024-06-10 12:18:04.202964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.202977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 00:29:15.004 [2024-06-10 12:18:04.203166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.203178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 00:29:15.004 [2024-06-10 12:18:04.203326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.203339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 
00:29:15.004 [2024-06-10 12:18:04.203487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.203501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 00:29:15.004 [2024-06-10 12:18:04.203588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.203599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 00:29:15.004 [2024-06-10 12:18:04.203692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.203704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 00:29:15.004 [2024-06-10 12:18:04.203864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.203877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 00:29:15.004 [2024-06-10 12:18:04.204023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.204038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 
00:29:15.004 [2024-06-10 12:18:04.204126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.204139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 00:29:15.004 [2024-06-10 12:18:04.204402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.204415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 00:29:15.004 [2024-06-10 12:18:04.204500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.204513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 00:29:15.004 [2024-06-10 12:18:04.204664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.204677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 00:29:15.004 [2024-06-10 12:18:04.204770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.204783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 
00:29:15.004 [2024-06-10 12:18:04.204885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.204898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 00:29:15.004 [2024-06-10 12:18:04.204990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.205003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 00:29:15.004 [2024-06-10 12:18:04.205088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.205101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 00:29:15.004 [2024-06-10 12:18:04.205182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.205194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 00:29:15.004 [2024-06-10 12:18:04.205353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.205366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 
00:29:15.004 [2024-06-10 12:18:04.205450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.205463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 00:29:15.004 [2024-06-10 12:18:04.205676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.205713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 00:29:15.004 [2024-06-10 12:18:04.205823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.205842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 00:29:15.004 [2024-06-10 12:18:04.205948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.205966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 00:29:15.004 [2024-06-10 12:18:04.206067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.004 [2024-06-10 12:18:04.206084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.004 qpair failed and we were unable to recover it. 
00:29:15.004 [2024-06-10 12:18:04.206176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.005 [2024-06-10 12:18:04.206193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.005 qpair failed and we were unable to recover it. 00:29:15.005 [2024-06-10 12:18:04.206287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.005 [2024-06-10 12:18:04.206305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.005 qpair failed and we were unable to recover it. 00:29:15.005 [2024-06-10 12:18:04.206395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.005 [2024-06-10 12:18:04.206413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.005 qpair failed and we were unable to recover it. 00:29:15.005 [2024-06-10 12:18:04.206565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.005 [2024-06-10 12:18:04.206584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.005 qpair failed and we were unable to recover it. 00:29:15.005 [2024-06-10 12:18:04.206748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.005 [2024-06-10 12:18:04.206766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.005 qpair failed and we were unable to recover it. 
00:29:15.005 [2024-06-10 12:18:04.206875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.005 [2024-06-10 12:18:04.206892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.005 qpair failed and we were unable to recover it. 00:29:15.005 [2024-06-10 12:18:04.206998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.005 [2024-06-10 12:18:04.207016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.005 qpair failed and we were unable to recover it. 00:29:15.005 [2024-06-10 12:18:04.207118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.005 [2024-06-10 12:18:04.207136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.005 qpair failed and we were unable to recover it. 00:29:15.005 [2024-06-10 12:18:04.207306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.005 [2024-06-10 12:18:04.207323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.005 qpair failed and we were unable to recover it. 00:29:15.005 [2024-06-10 12:18:04.207426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.005 [2024-06-10 12:18:04.207443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.005 qpair failed and we were unable to recover it. 
00:29:15.005 [2024-06-10 12:18:04.207636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.005 [2024-06-10 12:18:04.207654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.005 qpair failed and we were unable to recover it. 00:29:15.005 [2024-06-10 12:18:04.207759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.005 [2024-06-10 12:18:04.207777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.005 qpair failed and we were unable to recover it. 00:29:15.005 [2024-06-10 12:18:04.207892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.005 [2024-06-10 12:18:04.207910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.005 qpair failed and we were unable to recover it. 00:29:15.005 [2024-06-10 12:18:04.208014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.005 [2024-06-10 12:18:04.208032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.005 qpair failed and we were unable to recover it. 00:29:15.005 [2024-06-10 12:18:04.208123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.005 [2024-06-10 12:18:04.208139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.005 qpair failed and we were unable to recover it. 
00:29:15.005 [2024-06-10 12:18:04.208218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.005 [2024-06-10 12:18:04.208231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.005 qpair failed and we were unable to recover it. 00:29:15.005 [2024-06-10 12:18:04.208329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.005 [2024-06-10 12:18:04.208343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.005 qpair failed and we were unable to recover it. 00:29:15.005 [2024-06-10 12:18:04.208488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.005 [2024-06-10 12:18:04.208501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.005 qpair failed and we were unable to recover it. 00:29:15.005 [2024-06-10 12:18:04.208583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.005 [2024-06-10 12:18:04.208596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.005 qpair failed and we were unable to recover it. 00:29:15.005 [2024-06-10 12:18:04.208756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.005 [2024-06-10 12:18:04.208769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.005 qpair failed and we were unable to recover it. 
00:29:15.005 [2024-06-10 12:18:04.208921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.005 [2024-06-10 12:18:04.208935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.005 qpair failed and we were unable to recover it.
00:29:15.008 [2024-06-10 12:18:04.223640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.008 [2024-06-10 12:18:04.223653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.008 qpair failed and we were unable to recover it. 00:29:15.008 [2024-06-10 12:18:04.223822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.008 [2024-06-10 12:18:04.223841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.008 qpair failed and we were unable to recover it. 00:29:15.008 [2024-06-10 12:18:04.223960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.008 [2024-06-10 12:18:04.223978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.008 qpair failed and we were unable to recover it. 00:29:15.008 [2024-06-10 12:18:04.224076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.008 [2024-06-10 12:18:04.224093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.008 qpair failed and we were unable to recover it. 00:29:15.008 [2024-06-10 12:18:04.224198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.008 [2024-06-10 12:18:04.224215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.008 qpair failed and we were unable to recover it. 
00:29:15.008 [2024-06-10 12:18:04.224333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.008 [2024-06-10 12:18:04.224351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.008 qpair failed and we were unable to recover it. 00:29:15.008 [2024-06-10 12:18:04.224469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.008 [2024-06-10 12:18:04.224493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.008 qpair failed and we were unable to recover it. 00:29:15.008 [2024-06-10 12:18:04.224587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.008 [2024-06-10 12:18:04.224604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.008 qpair failed and we were unable to recover it. 00:29:15.008 [2024-06-10 12:18:04.224702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.008 [2024-06-10 12:18:04.224720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.008 qpair failed and we were unable to recover it. 00:29:15.008 [2024-06-10 12:18:04.224824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.008 [2024-06-10 12:18:04.224842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.008 qpair failed and we were unable to recover it. 
00:29:15.008 [2024-06-10 12:18:04.224960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.008 [2024-06-10 12:18:04.224974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.008 qpair failed and we were unable to recover it. 00:29:15.008 [2024-06-10 12:18:04.225057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.008 [2024-06-10 12:18:04.225069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.008 qpair failed and we were unable to recover it. 00:29:15.008 [2024-06-10 12:18:04.225171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.008 [2024-06-10 12:18:04.225184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.008 qpair failed and we were unable to recover it. 00:29:15.008 [2024-06-10 12:18:04.225252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.008 [2024-06-10 12:18:04.225265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.008 qpair failed and we were unable to recover it. 00:29:15.008 [2024-06-10 12:18:04.225354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.008 [2024-06-10 12:18:04.225369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.008 qpair failed and we were unable to recover it. 
00:29:15.008 [2024-06-10 12:18:04.225533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.008 [2024-06-10 12:18:04.225548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.008 qpair failed and we were unable to recover it. 00:29:15.008 [2024-06-10 12:18:04.225647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.225661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 00:29:15.009 [2024-06-10 12:18:04.225772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.225785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 00:29:15.009 [2024-06-10 12:18:04.225956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.225970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 00:29:15.009 [2024-06-10 12:18:04.226067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.226080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 
00:29:15.009 [2024-06-10 12:18:04.226162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.226174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 00:29:15.009 [2024-06-10 12:18:04.226267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.226280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 00:29:15.009 [2024-06-10 12:18:04.226370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.226383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 00:29:15.009 [2024-06-10 12:18:04.226470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.226491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 00:29:15.009 [2024-06-10 12:18:04.226582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.226595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 
00:29:15.009 [2024-06-10 12:18:04.226680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.226692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 00:29:15.009 [2024-06-10 12:18:04.226791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.226805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 00:29:15.009 [2024-06-10 12:18:04.226889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.226902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 00:29:15.009 [2024-06-10 12:18:04.226985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.226999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 00:29:15.009 [2024-06-10 12:18:04.227080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.227093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 
00:29:15.009 [2024-06-10 12:18:04.227193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.227206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 00:29:15.009 [2024-06-10 12:18:04.227374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.227386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 00:29:15.009 [2024-06-10 12:18:04.227499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.227512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 00:29:15.009 [2024-06-10 12:18:04.227677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.227690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 00:29:15.009 [2024-06-10 12:18:04.227770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.227783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 
00:29:15.009 [2024-06-10 12:18:04.227929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.227941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 00:29:15.009 [2024-06-10 12:18:04.228155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.228169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 00:29:15.009 [2024-06-10 12:18:04.228323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.228336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 00:29:15.009 [2024-06-10 12:18:04.228419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.228432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 00:29:15.009 [2024-06-10 12:18:04.228544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.228558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 
00:29:15.009 [2024-06-10 12:18:04.228639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.228652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 00:29:15.009 [2024-06-10 12:18:04.228751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.228764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 00:29:15.009 [2024-06-10 12:18:04.228910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.228923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 00:29:15.009 [2024-06-10 12:18:04.229024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.229037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 00:29:15.009 [2024-06-10 12:18:04.229126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.229139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 
00:29:15.009 [2024-06-10 12:18:04.229320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.229333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 00:29:15.009 [2024-06-10 12:18:04.229423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.229436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 00:29:15.009 [2024-06-10 12:18:04.229540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.229553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 00:29:15.009 [2024-06-10 12:18:04.229653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.229666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 00:29:15.009 [2024-06-10 12:18:04.229749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.229762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 
00:29:15.009 [2024-06-10 12:18:04.229874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.229887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 00:29:15.009 [2024-06-10 12:18:04.230043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.230056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 00:29:15.009 [2024-06-10 12:18:04.230301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.230314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 00:29:15.009 [2024-06-10 12:18:04.230483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.230496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 00:29:15.009 [2024-06-10 12:18:04.230582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.009 [2024-06-10 12:18:04.230597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.009 qpair failed and we were unable to recover it. 
00:29:15.010 [2024-06-10 12:18:04.230708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.010 [2024-06-10 12:18:04.230721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.010 qpair failed and we were unable to recover it. 00:29:15.010 [2024-06-10 12:18:04.230823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.010 [2024-06-10 12:18:04.230836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.010 qpair failed and we were unable to recover it. 00:29:15.010 [2024-06-10 12:18:04.230938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.010 [2024-06-10 12:18:04.230951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.010 qpair failed and we were unable to recover it. 00:29:15.010 [2024-06-10 12:18:04.231042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.010 [2024-06-10 12:18:04.231055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.010 qpair failed and we were unable to recover it. 00:29:15.010 [2024-06-10 12:18:04.231139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.010 [2024-06-10 12:18:04.231152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.010 qpair failed and we were unable to recover it. 
00:29:15.010 [2024-06-10 12:18:04.231259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.010 [2024-06-10 12:18:04.231272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.010 qpair failed and we were unable to recover it. 00:29:15.010 [2024-06-10 12:18:04.231352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.010 [2024-06-10 12:18:04.231366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.010 qpair failed and we were unable to recover it. 00:29:15.010 [2024-06-10 12:18:04.231456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.010 [2024-06-10 12:18:04.231469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.010 qpair failed and we were unable to recover it. 00:29:15.010 [2024-06-10 12:18:04.231571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.010 [2024-06-10 12:18:04.231585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.010 qpair failed and we were unable to recover it. 00:29:15.010 [2024-06-10 12:18:04.231749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.010 [2024-06-10 12:18:04.231762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.010 qpair failed and we were unable to recover it. 
00:29:15.010 [2024-06-10 12:18:04.231872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.010 [2024-06-10 12:18:04.231885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.010 qpair failed and we were unable to recover it. 00:29:15.010 [2024-06-10 12:18:04.231973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.010 [2024-06-10 12:18:04.231986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.010 qpair failed and we were unable to recover it. 00:29:15.010 [2024-06-10 12:18:04.232199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.010 [2024-06-10 12:18:04.232213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.010 qpair failed and we were unable to recover it. 00:29:15.010 [2024-06-10 12:18:04.232303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.010 [2024-06-10 12:18:04.232318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.010 qpair failed and we were unable to recover it. 00:29:15.010 [2024-06-10 12:18:04.232469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.010 [2024-06-10 12:18:04.232490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.010 qpair failed and we were unable to recover it. 
00:29:15.010 [2024-06-10 12:18:04.232580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.010 [2024-06-10 12:18:04.232593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.010 qpair failed and we were unable to recover it. 00:29:15.010 [2024-06-10 12:18:04.232704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.010 [2024-06-10 12:18:04.232717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.010 qpair failed and we were unable to recover it. 00:29:15.010 [2024-06-10 12:18:04.232816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.010 [2024-06-10 12:18:04.232829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.010 qpair failed and we were unable to recover it. 00:29:15.010 [2024-06-10 12:18:04.232925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.010 [2024-06-10 12:18:04.232938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.010 qpair failed and we were unable to recover it. 00:29:15.010 [2024-06-10 12:18:04.233071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.010 [2024-06-10 12:18:04.233084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.010 qpair failed and we were unable to recover it. 
00:29:15.010 [2024-06-10 12:18:04.233252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.010 [2024-06-10 12:18:04.233265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.010 qpair failed and we were unable to recover it. 00:29:15.010 [2024-06-10 12:18:04.233430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.010 [2024-06-10 12:18:04.233443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.010 qpair failed and we were unable to recover it. 00:29:15.010 [2024-06-10 12:18:04.233605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.010 [2024-06-10 12:18:04.233619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.010 qpair failed and we were unable to recover it. 00:29:15.010 [2024-06-10 12:18:04.233720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.010 [2024-06-10 12:18:04.233733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.010 qpair failed and we were unable to recover it. 00:29:15.010 [2024-06-10 12:18:04.233828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.010 [2024-06-10 12:18:04.233841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.010 qpair failed and we were unable to recover it. 
00:29:15.010 [2024-06-10 12:18:04.233995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.010 [2024-06-10 12:18:04.234008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.010 qpair failed and we were unable to recover it.
00:29:15.010 (last 3 messages repeated continuously through 12:18:04.248306, mostly against tqpair=0x7f5044000b90 and for several attempts against tqpair=0x7f504c000b90, all with addr=10.0.0.2, port=4420, errno = 111)
00:29:15.014 [2024-06-10 12:18:04.248316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.014 [2024-06-10 12:18:04.248329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.014 qpair failed and we were unable to recover it.
00:29:15.014 [2024-06-10 12:18:04.248400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.014 [2024-06-10 12:18:04.248412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.014 qpair failed and we were unable to recover it. 00:29:15.014 [2024-06-10 12:18:04.248552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.014 [2024-06-10 12:18:04.248565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.014 qpair failed and we were unable to recover it. 00:29:15.014 [2024-06-10 12:18:04.248671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.014 [2024-06-10 12:18:04.248684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.014 qpair failed and we were unable to recover it. 00:29:15.014 [2024-06-10 12:18:04.248766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.014 [2024-06-10 12:18:04.248779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.014 qpair failed and we were unable to recover it. 00:29:15.014 [2024-06-10 12:18:04.248893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.014 [2024-06-10 12:18:04.248907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.014 qpair failed and we were unable to recover it. 
00:29:15.014 [2024-06-10 12:18:04.249121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.014 [2024-06-10 12:18:04.249134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.014 qpair failed and we were unable to recover it. 00:29:15.014 [2024-06-10 12:18:04.249223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.014 [2024-06-10 12:18:04.249238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.014 qpair failed and we were unable to recover it. 00:29:15.014 [2024-06-10 12:18:04.249332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.014 [2024-06-10 12:18:04.249345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.014 qpair failed and we were unable to recover it. 00:29:15.014 [2024-06-10 12:18:04.249438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.014 [2024-06-10 12:18:04.249452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.014 qpair failed and we were unable to recover it. 00:29:15.014 [2024-06-10 12:18:04.249549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.014 [2024-06-10 12:18:04.249563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.014 qpair failed and we were unable to recover it. 
00:29:15.014 [2024-06-10 12:18:04.249643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.014 [2024-06-10 12:18:04.249655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.014 qpair failed and we were unable to recover it. 00:29:15.014 [2024-06-10 12:18:04.249761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.014 [2024-06-10 12:18:04.249773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.014 qpair failed and we were unable to recover it. 00:29:15.014 [2024-06-10 12:18:04.249850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.014 [2024-06-10 12:18:04.249862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.015 qpair failed and we were unable to recover it. 00:29:15.015 [2024-06-10 12:18:04.249951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.015 [2024-06-10 12:18:04.249964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.015 qpair failed and we were unable to recover it. 00:29:15.015 [2024-06-10 12:18:04.250032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.015 [2024-06-10 12:18:04.250044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.015 qpair failed and we were unable to recover it. 
00:29:15.015 [2024-06-10 12:18:04.250125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.015 [2024-06-10 12:18:04.250138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.015 qpair failed and we were unable to recover it. 00:29:15.015 [2024-06-10 12:18:04.250316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.015 [2024-06-10 12:18:04.250329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.015 qpair failed and we were unable to recover it. 00:29:15.015 [2024-06-10 12:18:04.250399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.015 [2024-06-10 12:18:04.250411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.015 qpair failed and we were unable to recover it. 00:29:15.015 [2024-06-10 12:18:04.250505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.015 [2024-06-10 12:18:04.250519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.015 qpair failed and we were unable to recover it. 00:29:15.015 [2024-06-10 12:18:04.250597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.015 [2024-06-10 12:18:04.250610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.015 qpair failed and we were unable to recover it. 
00:29:15.015 [2024-06-10 12:18:04.250771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.015 [2024-06-10 12:18:04.250785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.015 qpair failed and we were unable to recover it. 00:29:15.015 [2024-06-10 12:18:04.250931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.015 [2024-06-10 12:18:04.250944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.015 qpair failed and we were unable to recover it. 00:29:15.015 [2024-06-10 12:18:04.251041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.015 [2024-06-10 12:18:04.251054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.015 qpair failed and we were unable to recover it. 00:29:15.015 [2024-06-10 12:18:04.251143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.015 [2024-06-10 12:18:04.251155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.015 qpair failed and we were unable to recover it. 00:29:15.015 [2024-06-10 12:18:04.251368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.015 [2024-06-10 12:18:04.251382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.015 qpair failed and we were unable to recover it. 
00:29:15.015 [2024-06-10 12:18:04.251529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.015 [2024-06-10 12:18:04.251543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.015 qpair failed and we were unable to recover it. 00:29:15.015 [2024-06-10 12:18:04.251682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.015 [2024-06-10 12:18:04.251695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.015 qpair failed and we were unable to recover it. 00:29:15.015 [2024-06-10 12:18:04.251787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.015 [2024-06-10 12:18:04.251800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.015 qpair failed and we were unable to recover it. 00:29:15.015 [2024-06-10 12:18:04.251898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.015 [2024-06-10 12:18:04.251910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.015 qpair failed and we were unable to recover it. 00:29:15.015 [2024-06-10 12:18:04.251997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.015 [2024-06-10 12:18:04.252011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.015 qpair failed and we were unable to recover it. 
00:29:15.015 [2024-06-10 12:18:04.252176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.015 [2024-06-10 12:18:04.252189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.015 qpair failed and we were unable to recover it. 00:29:15.015 [2024-06-10 12:18:04.252284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.015 [2024-06-10 12:18:04.252297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.015 qpair failed and we were unable to recover it. 00:29:15.015 [2024-06-10 12:18:04.252391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.015 [2024-06-10 12:18:04.252404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.015 qpair failed and we were unable to recover it. 00:29:15.015 [2024-06-10 12:18:04.252495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.015 [2024-06-10 12:18:04.252509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.015 qpair failed and we were unable to recover it. 00:29:15.015 [2024-06-10 12:18:04.252600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.015 [2024-06-10 12:18:04.252612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.015 qpair failed and we were unable to recover it. 
00:29:15.015 [2024-06-10 12:18:04.252702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.015 [2024-06-10 12:18:04.252714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.015 qpair failed and we were unable to recover it. 00:29:15.015 [2024-06-10 12:18:04.252790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.015 [2024-06-10 12:18:04.252804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.015 qpair failed and we were unable to recover it. 00:29:15.015 [2024-06-10 12:18:04.252880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.015 [2024-06-10 12:18:04.252892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.015 qpair failed and we were unable to recover it. 00:29:15.016 [2024-06-10 12:18:04.253031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.016 [2024-06-10 12:18:04.253044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.016 qpair failed and we were unable to recover it. 00:29:15.016 [2024-06-10 12:18:04.253131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.016 [2024-06-10 12:18:04.253144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.016 qpair failed and we were unable to recover it. 
00:29:15.016 [2024-06-10 12:18:04.253236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.016 [2024-06-10 12:18:04.253249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.016 qpair failed and we were unable to recover it. 00:29:15.016 [2024-06-10 12:18:04.253350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.016 [2024-06-10 12:18:04.253363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.016 qpair failed and we were unable to recover it. 00:29:15.016 [2024-06-10 12:18:04.253518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.016 [2024-06-10 12:18:04.253531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.016 qpair failed and we were unable to recover it. 00:29:15.016 [2024-06-10 12:18:04.253703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.016 [2024-06-10 12:18:04.253716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.016 qpair failed and we were unable to recover it. 00:29:15.016 [2024-06-10 12:18:04.253808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.016 [2024-06-10 12:18:04.253822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.016 qpair failed and we were unable to recover it. 
00:29:15.016 [2024-06-10 12:18:04.253902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.016 [2024-06-10 12:18:04.253915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.016 qpair failed and we were unable to recover it. 00:29:15.016 [2024-06-10 12:18:04.254020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.016 [2024-06-10 12:18:04.254036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.016 qpair failed and we were unable to recover it. 00:29:15.016 [2024-06-10 12:18:04.254196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.016 [2024-06-10 12:18:04.254208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.016 qpair failed and we were unable to recover it. 00:29:15.016 [2024-06-10 12:18:04.254290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.016 [2024-06-10 12:18:04.254303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.016 qpair failed and we were unable to recover it. 00:29:15.016 [2024-06-10 12:18:04.254398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.016 [2024-06-10 12:18:04.254411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.016 qpair failed and we were unable to recover it. 
00:29:15.016 [2024-06-10 12:18:04.254492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.016 [2024-06-10 12:18:04.254505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.016 qpair failed and we were unable to recover it. 00:29:15.016 [2024-06-10 12:18:04.254669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.016 [2024-06-10 12:18:04.254682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.016 qpair failed and we were unable to recover it. 00:29:15.016 [2024-06-10 12:18:04.254786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.016 [2024-06-10 12:18:04.254799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.016 qpair failed and we were unable to recover it. 00:29:15.016 [2024-06-10 12:18:04.254954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.016 [2024-06-10 12:18:04.254967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.016 qpair failed and we were unable to recover it. 00:29:15.016 [2024-06-10 12:18:04.255049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.016 [2024-06-10 12:18:04.255061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.016 qpair failed and we were unable to recover it. 
00:29:15.016 [2024-06-10 12:18:04.255149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.016 [2024-06-10 12:18:04.255161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.016 qpair failed and we were unable to recover it. 00:29:15.016 [2024-06-10 12:18:04.255324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.016 [2024-06-10 12:18:04.255336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.016 qpair failed and we were unable to recover it. 00:29:15.016 [2024-06-10 12:18:04.255429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.016 [2024-06-10 12:18:04.255442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.016 qpair failed and we were unable to recover it. 00:29:15.016 [2024-06-10 12:18:04.255544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.016 [2024-06-10 12:18:04.255557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.016 qpair failed and we were unable to recover it. 00:29:15.016 [2024-06-10 12:18:04.255643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.016 [2024-06-10 12:18:04.255656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.016 qpair failed and we were unable to recover it. 
00:29:15.016 [2024-06-10 12:18:04.255742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.016 [2024-06-10 12:18:04.255756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.016 qpair failed and we were unable to recover it. 00:29:15.016 [2024-06-10 12:18:04.255836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.016 [2024-06-10 12:18:04.255848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.016 qpair failed and we were unable to recover it. 00:29:15.016 [2024-06-10 12:18:04.255930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.016 [2024-06-10 12:18:04.255943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.016 qpair failed and we were unable to recover it. 00:29:15.017 [2024-06-10 12:18:04.256033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.017 [2024-06-10 12:18:04.256045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.017 qpair failed and we were unable to recover it. 00:29:15.017 [2024-06-10 12:18:04.256135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.017 [2024-06-10 12:18:04.256148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.017 qpair failed and we were unable to recover it. 
00:29:15.017 [2024-06-10 12:18:04.256285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.017 [2024-06-10 12:18:04.256298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.017 qpair failed and we were unable to recover it. 00:29:15.017 [2024-06-10 12:18:04.256378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.017 [2024-06-10 12:18:04.256391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.017 qpair failed and we were unable to recover it. 00:29:15.017 [2024-06-10 12:18:04.256472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.017 [2024-06-10 12:18:04.256491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.017 qpair failed and we were unable to recover it. 00:29:15.017 [2024-06-10 12:18:04.256642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.017 [2024-06-10 12:18:04.256655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.017 qpair failed and we were unable to recover it. 00:29:15.017 [2024-06-10 12:18:04.256747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.017 [2024-06-10 12:18:04.256760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.017 qpair failed and we were unable to recover it. 
00:29:15.017 [2024-06-10 12:18:04.256926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.017 [2024-06-10 12:18:04.256939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.017 qpair failed and we were unable to recover it. 00:29:15.017 [2024-06-10 12:18:04.257153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.017 [2024-06-10 12:18:04.257166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.017 qpair failed and we were unable to recover it. 00:29:15.017 [2024-06-10 12:18:04.257385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.017 [2024-06-10 12:18:04.257398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.017 qpair failed and we were unable to recover it. 00:29:15.017 [2024-06-10 12:18:04.257555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.017 [2024-06-10 12:18:04.257569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.017 qpair failed and we were unable to recover it. 00:29:15.017 [2024-06-10 12:18:04.257716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.017 [2024-06-10 12:18:04.257729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.017 qpair failed and we were unable to recover it. 
00:29:15.017 [2024-06-10 12:18:04.257824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.017 [2024-06-10 12:18:04.257838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.017 qpair failed and we were unable to recover it. 00:29:15.017 [2024-06-10 12:18:04.257936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.017 [2024-06-10 12:18:04.257948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.017 qpair failed and we were unable to recover it. 00:29:15.017 [2024-06-10 12:18:04.258187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.017 [2024-06-10 12:18:04.258201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.017 qpair failed and we were unable to recover it. 00:29:15.017 [2024-06-10 12:18:04.258300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.017 [2024-06-10 12:18:04.258313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.017 qpair failed and we were unable to recover it. 00:29:15.017 [2024-06-10 12:18:04.258482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.017 [2024-06-10 12:18:04.258494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.017 qpair failed and we were unable to recover it. 
00:29:15.022 [2024-06-10 12:18:04.274603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.022 [2024-06-10 12:18:04.274616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.022 qpair failed and we were unable to recover it. 00:29:15.022 [2024-06-10 12:18:04.274719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.022 [2024-06-10 12:18:04.274732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.022 qpair failed and we were unable to recover it. 00:29:15.022 [2024-06-10 12:18:04.274975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.022 [2024-06-10 12:18:04.274988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.022 qpair failed and we were unable to recover it. 00:29:15.022 [2024-06-10 12:18:04.275159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.022 [2024-06-10 12:18:04.275173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.022 qpair failed and we were unable to recover it. 00:29:15.022 [2024-06-10 12:18:04.275273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.022 [2024-06-10 12:18:04.275286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.022 qpair failed and we were unable to recover it. 
00:29:15.022 [2024-06-10 12:18:04.275383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.022 [2024-06-10 12:18:04.275395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.022 qpair failed and we were unable to recover it. 00:29:15.022 [2024-06-10 12:18:04.275547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.022 [2024-06-10 12:18:04.275561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.022 qpair failed and we were unable to recover it. 00:29:15.022 [2024-06-10 12:18:04.275661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.022 [2024-06-10 12:18:04.275674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.022 qpair failed and we were unable to recover it. 00:29:15.022 [2024-06-10 12:18:04.275889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.022 [2024-06-10 12:18:04.275903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.022 qpair failed and we were unable to recover it. 00:29:15.022 [2024-06-10 12:18:04.275997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.022 [2024-06-10 12:18:04.276010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.022 qpair failed and we were unable to recover it. 
00:29:15.022 [2024-06-10 12:18:04.276163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.022 [2024-06-10 12:18:04.276176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.022 qpair failed and we were unable to recover it. 00:29:15.022 [2024-06-10 12:18:04.276330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.022 [2024-06-10 12:18:04.276344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.022 qpair failed and we were unable to recover it. 00:29:15.022 [2024-06-10 12:18:04.276566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.022 [2024-06-10 12:18:04.276580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.022 qpair failed and we were unable to recover it. 00:29:15.022 [2024-06-10 12:18:04.276677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.022 [2024-06-10 12:18:04.276690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.022 qpair failed and we were unable to recover it. 00:29:15.022 [2024-06-10 12:18:04.276854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.022 [2024-06-10 12:18:04.276867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.022 qpair failed and we were unable to recover it. 
00:29:15.022 [2024-06-10 12:18:04.277048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.022 [2024-06-10 12:18:04.277062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.022 qpair failed and we were unable to recover it. 00:29:15.022 [2024-06-10 12:18:04.277159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.022 [2024-06-10 12:18:04.277174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.022 qpair failed and we were unable to recover it. 00:29:15.022 [2024-06-10 12:18:04.277317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.022 [2024-06-10 12:18:04.277331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.022 qpair failed and we were unable to recover it. 00:29:15.022 [2024-06-10 12:18:04.277500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.022 [2024-06-10 12:18:04.277513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.022 qpair failed and we were unable to recover it. 00:29:15.022 [2024-06-10 12:18:04.277659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.022 [2024-06-10 12:18:04.277672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.022 qpair failed and we were unable to recover it. 
00:29:15.022 [2024-06-10 12:18:04.277912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.022 [2024-06-10 12:18:04.277926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.022 qpair failed and we were unable to recover it. 00:29:15.022 [2024-06-10 12:18:04.278037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.022 [2024-06-10 12:18:04.278050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.022 qpair failed and we were unable to recover it. 00:29:15.022 [2024-06-10 12:18:04.278220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.022 [2024-06-10 12:18:04.278234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.022 qpair failed and we were unable to recover it. 00:29:15.022 [2024-06-10 12:18:04.278468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.022 [2024-06-10 12:18:04.278512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.022 qpair failed and we were unable to recover it. 00:29:15.022 [2024-06-10 12:18:04.278619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.022 [2024-06-10 12:18:04.278632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.022 qpair failed and we were unable to recover it. 
00:29:15.022 [2024-06-10 12:18:04.278726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.022 [2024-06-10 12:18:04.278737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.022 qpair failed and we were unable to recover it. 00:29:15.022 [2024-06-10 12:18:04.278834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.022 [2024-06-10 12:18:04.278848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.022 qpair failed and we were unable to recover it. 00:29:15.022 [2024-06-10 12:18:04.279002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.022 [2024-06-10 12:18:04.279015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.022 qpair failed and we were unable to recover it. 00:29:15.022 [2024-06-10 12:18:04.279162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.022 [2024-06-10 12:18:04.279176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.022 qpair failed and we were unable to recover it. 00:29:15.022 [2024-06-10 12:18:04.279286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.022 [2024-06-10 12:18:04.279299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.022 qpair failed and we were unable to recover it. 
00:29:15.022 [2024-06-10 12:18:04.279368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.022 [2024-06-10 12:18:04.279380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.022 qpair failed and we were unable to recover it. 00:29:15.022 [2024-06-10 12:18:04.279532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.023 [2024-06-10 12:18:04.279546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.023 qpair failed and we were unable to recover it. 00:29:15.023 [2024-06-10 12:18:04.279706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.023 [2024-06-10 12:18:04.279720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.023 qpair failed and we were unable to recover it. 00:29:15.023 [2024-06-10 12:18:04.279888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.023 [2024-06-10 12:18:04.279902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.023 qpair failed and we were unable to recover it. 00:29:15.023 [2024-06-10 12:18:04.280049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.023 [2024-06-10 12:18:04.280062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.023 qpair failed and we were unable to recover it. 
00:29:15.023 [2024-06-10 12:18:04.280239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.023 [2024-06-10 12:18:04.280253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.023 qpair failed and we were unable to recover it. 00:29:15.023 [2024-06-10 12:18:04.280385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.023 [2024-06-10 12:18:04.280398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.023 qpair failed and we were unable to recover it. 00:29:15.023 [2024-06-10 12:18:04.280567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.023 [2024-06-10 12:18:04.280586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.023 qpair failed and we were unable to recover it. 00:29:15.023 [2024-06-10 12:18:04.280795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.023 [2024-06-10 12:18:04.280808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.023 qpair failed and we were unable to recover it. 00:29:15.023 [2024-06-10 12:18:04.280893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.023 [2024-06-10 12:18:04.280906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.023 qpair failed and we were unable to recover it. 
00:29:15.023 [2024-06-10 12:18:04.281065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.023 [2024-06-10 12:18:04.281079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.023 qpair failed and we were unable to recover it. 00:29:15.023 [2024-06-10 12:18:04.281142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.023 [2024-06-10 12:18:04.281154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.023 qpair failed and we were unable to recover it. 00:29:15.023 [2024-06-10 12:18:04.281325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.023 [2024-06-10 12:18:04.281339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.023 qpair failed and we were unable to recover it. 00:29:15.023 [2024-06-10 12:18:04.281503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.023 [2024-06-10 12:18:04.281517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.023 qpair failed and we were unable to recover it. 00:29:15.023 [2024-06-10 12:18:04.281633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.023 [2024-06-10 12:18:04.281646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.023 qpair failed and we were unable to recover it. 
00:29:15.023 [2024-06-10 12:18:04.281829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.023 [2024-06-10 12:18:04.281843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.023 qpair failed and we were unable to recover it. 00:29:15.023 [2024-06-10 12:18:04.282013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.023 [2024-06-10 12:18:04.282027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.023 qpair failed and we were unable to recover it. 00:29:15.023 [2024-06-10 12:18:04.282127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.023 [2024-06-10 12:18:04.282140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.023 qpair failed and we were unable to recover it. 00:29:15.023 [2024-06-10 12:18:04.282223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.023 [2024-06-10 12:18:04.282237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.023 qpair failed and we were unable to recover it. 00:29:15.023 [2024-06-10 12:18:04.282398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.023 [2024-06-10 12:18:04.282411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.023 qpair failed and we were unable to recover it. 
00:29:15.023 [2024-06-10 12:18:04.282505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.023 [2024-06-10 12:18:04.282518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.023 qpair failed and we were unable to recover it. 00:29:15.023 [2024-06-10 12:18:04.282670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.023 [2024-06-10 12:18:04.282683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.023 qpair failed and we were unable to recover it. 00:29:15.023 [2024-06-10 12:18:04.282849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.023 [2024-06-10 12:18:04.282862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.023 qpair failed and we were unable to recover it. 00:29:15.023 [2024-06-10 12:18:04.282965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.023 [2024-06-10 12:18:04.282978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.023 qpair failed and we were unable to recover it. 00:29:15.023 [2024-06-10 12:18:04.283077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.023 [2024-06-10 12:18:04.283090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.023 qpair failed and we were unable to recover it. 
00:29:15.023 [2024-06-10 12:18:04.283158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.023 [2024-06-10 12:18:04.283170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.023 qpair failed and we were unable to recover it. 00:29:15.023 [2024-06-10 12:18:04.283253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.024 [2024-06-10 12:18:04.283268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.024 qpair failed and we were unable to recover it. 00:29:15.024 [2024-06-10 12:18:04.283505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.024 [2024-06-10 12:18:04.283519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.024 qpair failed and we were unable to recover it. 00:29:15.024 [2024-06-10 12:18:04.283619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.024 [2024-06-10 12:18:04.283632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.024 qpair failed and we were unable to recover it. 00:29:15.024 [2024-06-10 12:18:04.283787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.024 [2024-06-10 12:18:04.283801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.024 qpair failed and we were unable to recover it. 
00:29:15.024 [2024-06-10 12:18:04.283892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.024 [2024-06-10 12:18:04.283905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.024 qpair failed and we were unable to recover it. 00:29:15.024 [2024-06-10 12:18:04.284119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.024 [2024-06-10 12:18:04.284133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.024 qpair failed and we were unable to recover it. 00:29:15.024 [2024-06-10 12:18:04.284348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.024 [2024-06-10 12:18:04.284361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.024 qpair failed and we were unable to recover it. 00:29:15.024 [2024-06-10 12:18:04.284454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.024 [2024-06-10 12:18:04.284466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.024 qpair failed and we were unable to recover it. 00:29:15.024 [2024-06-10 12:18:04.284689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.024 [2024-06-10 12:18:04.284703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.024 qpair failed and we were unable to recover it. 
00:29:15.024 [2024-06-10 12:18:04.284795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.024 [2024-06-10 12:18:04.284808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.024 qpair failed and we were unable to recover it. 00:29:15.024 [2024-06-10 12:18:04.284889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.024 [2024-06-10 12:18:04.284902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.024 qpair failed and we were unable to recover it. 00:29:15.024 [2024-06-10 12:18:04.284992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.024 [2024-06-10 12:18:04.285005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.024 qpair failed and we were unable to recover it. 00:29:15.024 [2024-06-10 12:18:04.285159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.024 [2024-06-10 12:18:04.285173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.024 qpair failed and we were unable to recover it. 00:29:15.024 [2024-06-10 12:18:04.285281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.024 [2024-06-10 12:18:04.285294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.024 qpair failed and we were unable to recover it. 
00:29:15.024 [2024-06-10 12:18:04.285508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.024 [2024-06-10 12:18:04.285521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.024 qpair failed and we were unable to recover it. 00:29:15.024 [2024-06-10 12:18:04.285611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.024 [2024-06-10 12:18:04.285625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.024 qpair failed and we were unable to recover it. 00:29:15.024 [2024-06-10 12:18:04.285842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.024 [2024-06-10 12:18:04.285855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.024 qpair failed and we were unable to recover it. 00:29:15.024 [2024-06-10 12:18:04.285936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.024 [2024-06-10 12:18:04.285947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.024 qpair failed and we were unable to recover it. 00:29:15.024 [2024-06-10 12:18:04.286147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.024 [2024-06-10 12:18:04.286160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.024 qpair failed and we were unable to recover it. 
00:29:15.024 [2024-06-10 12:18:04.286326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.024 [2024-06-10 12:18:04.286339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.024 qpair failed and we were unable to recover it.
[... the same three-line error record (posix.c:1037:posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 12:18:04.286563 through 12:18:04.303996. tqpair is 0x7f5044000b90 throughout, except for three consecutive retries between 12:18:04.293863 and 12:18:04.294239 that report tqpair=0x7f504c000b90 before reverting to 0x7f5044000b90 ...]
00:29:15.029 [2024-06-10 12:18:04.304069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.029 [2024-06-10 12:18:04.304082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.029 qpair failed and we were unable to recover it. 00:29:15.029 [2024-06-10 12:18:04.304232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.029 [2024-06-10 12:18:04.304245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.029 qpair failed and we were unable to recover it. 00:29:15.029 [2024-06-10 12:18:04.304331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.029 [2024-06-10 12:18:04.304344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.029 qpair failed and we were unable to recover it. 00:29:15.029 [2024-06-10 12:18:04.304440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.029 [2024-06-10 12:18:04.304453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.029 qpair failed and we were unable to recover it. 00:29:15.029 [2024-06-10 12:18:04.304623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.029 [2024-06-10 12:18:04.304637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.029 qpair failed and we were unable to recover it. 
00:29:15.029 [2024-06-10 12:18:04.304769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.029 [2024-06-10 12:18:04.304781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.029 qpair failed and we were unable to recover it. 00:29:15.029 [2024-06-10 12:18:04.305019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.029 [2024-06-10 12:18:04.305033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.029 qpair failed and we were unable to recover it. 00:29:15.029 [2024-06-10 12:18:04.305230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.029 [2024-06-10 12:18:04.305243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.029 qpair failed and we were unable to recover it. 00:29:15.029 [2024-06-10 12:18:04.305344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.029 [2024-06-10 12:18:04.305356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.029 qpair failed and we were unable to recover it. 00:29:15.029 [2024-06-10 12:18:04.305544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.029 [2024-06-10 12:18:04.305557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.029 qpair failed and we were unable to recover it. 
00:29:15.030 [2024-06-10 12:18:04.305772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.030 [2024-06-10 12:18:04.305786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.030 qpair failed and we were unable to recover it. 00:29:15.030 [2024-06-10 12:18:04.305895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.030 [2024-06-10 12:18:04.305908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.030 qpair failed and we were unable to recover it. 00:29:15.030 [2024-06-10 12:18:04.306006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.030 [2024-06-10 12:18:04.306019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.030 qpair failed and we were unable to recover it. 00:29:15.030 [2024-06-10 12:18:04.306183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.030 [2024-06-10 12:18:04.306196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.030 qpair failed and we were unable to recover it. 00:29:15.030 [2024-06-10 12:18:04.306412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.030 [2024-06-10 12:18:04.306425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.030 qpair failed and we were unable to recover it. 
00:29:15.030 [2024-06-10 12:18:04.306609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.030 [2024-06-10 12:18:04.306623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.030 qpair failed and we were unable to recover it. 00:29:15.030 [2024-06-10 12:18:04.306774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.030 [2024-06-10 12:18:04.306787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.030 qpair failed and we were unable to recover it. 00:29:15.030 [2024-06-10 12:18:04.306956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.030 [2024-06-10 12:18:04.306969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.030 qpair failed and we were unable to recover it. 00:29:15.030 [2024-06-10 12:18:04.307127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.030 [2024-06-10 12:18:04.307140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.030 qpair failed and we were unable to recover it. 00:29:15.030 [2024-06-10 12:18:04.307307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.030 [2024-06-10 12:18:04.307320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.030 qpair failed and we were unable to recover it. 
00:29:15.030 [2024-06-10 12:18:04.307488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.030 [2024-06-10 12:18:04.307501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.030 qpair failed and we were unable to recover it. 00:29:15.030 [2024-06-10 12:18:04.307593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.030 [2024-06-10 12:18:04.307606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.030 qpair failed and we were unable to recover it. 00:29:15.030 [2024-06-10 12:18:04.307698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.030 [2024-06-10 12:18:04.307712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.030 qpair failed and we were unable to recover it. 00:29:15.030 [2024-06-10 12:18:04.307798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.030 [2024-06-10 12:18:04.307810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.030 qpair failed and we were unable to recover it. 00:29:15.030 [2024-06-10 12:18:04.307934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.030 [2024-06-10 12:18:04.307948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.030 qpair failed and we were unable to recover it. 
00:29:15.030 [2024-06-10 12:18:04.308162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.030 [2024-06-10 12:18:04.308175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.030 qpair failed and we were unable to recover it. 00:29:15.030 [2024-06-10 12:18:04.308389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.030 [2024-06-10 12:18:04.308403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.030 qpair failed and we were unable to recover it. 00:29:15.030 [2024-06-10 12:18:04.308562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.030 [2024-06-10 12:18:04.308576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.030 qpair failed and we were unable to recover it. 00:29:15.030 [2024-06-10 12:18:04.308671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.030 [2024-06-10 12:18:04.308684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.030 qpair failed and we were unable to recover it. 00:29:15.030 [2024-06-10 12:18:04.308908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.030 [2024-06-10 12:18:04.308921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.030 qpair failed and we were unable to recover it. 
00:29:15.030 [2024-06-10 12:18:04.309084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.030 [2024-06-10 12:18:04.309097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.030 qpair failed and we were unable to recover it. 00:29:15.030 [2024-06-10 12:18:04.309318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.030 [2024-06-10 12:18:04.309332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.030 qpair failed and we were unable to recover it. 00:29:15.030 [2024-06-10 12:18:04.309490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.030 [2024-06-10 12:18:04.309504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.030 qpair failed and we were unable to recover it. 00:29:15.030 [2024-06-10 12:18:04.309607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.030 [2024-06-10 12:18:04.309620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.030 qpair failed and we were unable to recover it. 00:29:15.030 [2024-06-10 12:18:04.309857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.030 [2024-06-10 12:18:04.309871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.030 qpair failed and we were unable to recover it. 
00:29:15.030 [2024-06-10 12:18:04.309950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.030 [2024-06-10 12:18:04.309963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.030 qpair failed and we were unable to recover it. 00:29:15.030 [2024-06-10 12:18:04.310151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.031 [2024-06-10 12:18:04.310164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.031 qpair failed and we were unable to recover it. 00:29:15.031 [2024-06-10 12:18:04.310263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.031 [2024-06-10 12:18:04.310276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.031 qpair failed and we were unable to recover it. 00:29:15.031 [2024-06-10 12:18:04.310466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.031 [2024-06-10 12:18:04.310502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.031 qpair failed and we were unable to recover it. 00:29:15.031 [2024-06-10 12:18:04.310721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.031 [2024-06-10 12:18:04.310734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.031 qpair failed and we were unable to recover it. 
00:29:15.031 [2024-06-10 12:18:04.310889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.031 [2024-06-10 12:18:04.310901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.031 qpair failed and we were unable to recover it. 00:29:15.031 [2024-06-10 12:18:04.310996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.031 [2024-06-10 12:18:04.311009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.031 qpair failed and we were unable to recover it. 00:29:15.031 [2024-06-10 12:18:04.311102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.031 [2024-06-10 12:18:04.311115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.031 qpair failed and we were unable to recover it. 00:29:15.031 [2024-06-10 12:18:04.311290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.031 [2024-06-10 12:18:04.311304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.031 qpair failed and we were unable to recover it. 00:29:15.031 [2024-06-10 12:18:04.311406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.031 [2024-06-10 12:18:04.311419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.031 qpair failed and we were unable to recover it. 
00:29:15.031 [2024-06-10 12:18:04.311573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.031 [2024-06-10 12:18:04.311587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.031 qpair failed and we were unable to recover it. 00:29:15.031 [2024-06-10 12:18:04.311751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.031 [2024-06-10 12:18:04.311764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.031 qpair failed and we were unable to recover it. 00:29:15.031 [2024-06-10 12:18:04.311977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.031 [2024-06-10 12:18:04.311989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.031 qpair failed and we were unable to recover it. 00:29:15.031 [2024-06-10 12:18:04.312147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.031 [2024-06-10 12:18:04.312160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.031 qpair failed and we were unable to recover it. 00:29:15.031 [2024-06-10 12:18:04.312297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.031 [2024-06-10 12:18:04.312311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.031 qpair failed and we were unable to recover it. 
00:29:15.031 [2024-06-10 12:18:04.312408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.031 [2024-06-10 12:18:04.312420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.031 qpair failed and we were unable to recover it. 00:29:15.031 [2024-06-10 12:18:04.312599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.031 [2024-06-10 12:18:04.312612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.031 qpair failed and we were unable to recover it. 00:29:15.031 [2024-06-10 12:18:04.312725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.031 [2024-06-10 12:18:04.312738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.031 qpair failed and we were unable to recover it. 00:29:15.031 [2024-06-10 12:18:04.312903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.031 [2024-06-10 12:18:04.312917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.031 qpair failed and we were unable to recover it. 00:29:15.031 [2024-06-10 12:18:04.313070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.031 [2024-06-10 12:18:04.313083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.031 qpair failed and we were unable to recover it. 
00:29:15.031 [2024-06-10 12:18:04.313251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.031 [2024-06-10 12:18:04.313263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.031 qpair failed and we were unable to recover it. 00:29:15.032 [2024-06-10 12:18:04.313413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.032 [2024-06-10 12:18:04.313426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.032 qpair failed and we were unable to recover it. 00:29:15.032 [2024-06-10 12:18:04.313601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.032 [2024-06-10 12:18:04.313615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.032 qpair failed and we were unable to recover it. 00:29:15.032 [2024-06-10 12:18:04.313780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.032 [2024-06-10 12:18:04.313793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.032 qpair failed and we were unable to recover it. 00:29:15.032 [2024-06-10 12:18:04.313874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.032 [2024-06-10 12:18:04.313887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.032 qpair failed and we were unable to recover it. 
00:29:15.032 [2024-06-10 12:18:04.314046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.032 [2024-06-10 12:18:04.314059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.032 qpair failed and we were unable to recover it. 00:29:15.032 [2024-06-10 12:18:04.314296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.032 [2024-06-10 12:18:04.314309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.032 qpair failed and we were unable to recover it. 00:29:15.032 [2024-06-10 12:18:04.314471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.032 [2024-06-10 12:18:04.314489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.032 qpair failed and we were unable to recover it. 00:29:15.032 [2024-06-10 12:18:04.314579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.032 [2024-06-10 12:18:04.314591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.032 qpair failed and we were unable to recover it. 00:29:15.032 [2024-06-10 12:18:04.314763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.032 [2024-06-10 12:18:04.314776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.032 qpair failed and we were unable to recover it. 
00:29:15.032 [2024-06-10 12:18:04.314911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.032 [2024-06-10 12:18:04.314924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.032 qpair failed and we were unable to recover it. 00:29:15.032 [2024-06-10 12:18:04.315085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.032 [2024-06-10 12:18:04.315100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.032 qpair failed and we were unable to recover it. 00:29:15.032 [2024-06-10 12:18:04.315330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.032 [2024-06-10 12:18:04.315343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.032 qpair failed and we were unable to recover it. 00:29:15.032 [2024-06-10 12:18:04.315508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.032 [2024-06-10 12:18:04.315521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.032 qpair failed and we were unable to recover it. 00:29:15.032 [2024-06-10 12:18:04.315780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.032 [2024-06-10 12:18:04.315793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.032 qpair failed and we were unable to recover it. 
00:29:15.032 [2024-06-10 12:18:04.315867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.032 [2024-06-10 12:18:04.315880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.032 qpair failed and we were unable to recover it. 00:29:15.032 [2024-06-10 12:18:04.316094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.032 [2024-06-10 12:18:04.316107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.032 qpair failed and we were unable to recover it. 00:29:15.032 [2024-06-10 12:18:04.316216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.032 [2024-06-10 12:18:04.316229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.032 qpair failed and we were unable to recover it. 00:29:15.032 [2024-06-10 12:18:04.316394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.032 [2024-06-10 12:18:04.316407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.032 qpair failed and we were unable to recover it. 00:29:15.032 [2024-06-10 12:18:04.316644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.032 [2024-06-10 12:18:04.316658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.032 qpair failed and we were unable to recover it. 
00:29:15.032 [2024-06-10 12:18:04.316819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.032 [2024-06-10 12:18:04.316832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.032 qpair failed and we were unable to recover it.
00:29:15.035 [2024-06-10 12:18:04.335701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.035 [2024-06-10 12:18:04.335714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.035 qpair failed and we were unable to recover it. 00:29:15.035 [2024-06-10 12:18:04.335818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.035 [2024-06-10 12:18:04.335832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.035 qpair failed and we were unable to recover it. 00:29:15.035 [2024-06-10 12:18:04.335988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.035 [2024-06-10 12:18:04.336001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.035 qpair failed and we were unable to recover it. 00:29:15.035 [2024-06-10 12:18:04.336181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.035 [2024-06-10 12:18:04.336195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.035 qpair failed and we were unable to recover it. 00:29:15.035 [2024-06-10 12:18:04.336343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.035 [2024-06-10 12:18:04.336356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.035 qpair failed and we were unable to recover it. 
00:29:15.035 [2024-06-10 12:18:04.336502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.035 [2024-06-10 12:18:04.336515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.035 qpair failed and we were unable to recover it. 00:29:15.035 [2024-06-10 12:18:04.336674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.035 [2024-06-10 12:18:04.336687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.035 qpair failed and we were unable to recover it. 00:29:15.035 [2024-06-10 12:18:04.336866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.035 [2024-06-10 12:18:04.336879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.035 qpair failed and we were unable to recover it. 00:29:15.035 [2024-06-10 12:18:04.337037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.035 [2024-06-10 12:18:04.337050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.035 qpair failed and we were unable to recover it. 00:29:15.035 [2024-06-10 12:18:04.337146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.035 [2024-06-10 12:18:04.337160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.035 qpair failed and we were unable to recover it. 
00:29:15.035 [2024-06-10 12:18:04.337323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.035 [2024-06-10 12:18:04.337336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.035 qpair failed and we were unable to recover it. 00:29:15.035 [2024-06-10 12:18:04.337421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.035 [2024-06-10 12:18:04.337434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.035 qpair failed and we were unable to recover it. 00:29:15.035 [2024-06-10 12:18:04.337587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.036 [2024-06-10 12:18:04.337601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.036 qpair failed and we were unable to recover it. 00:29:15.036 [2024-06-10 12:18:04.337701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.036 [2024-06-10 12:18:04.337715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.036 qpair failed and we were unable to recover it. 00:29:15.036 [2024-06-10 12:18:04.337816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.036 [2024-06-10 12:18:04.337828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.036 qpair failed and we were unable to recover it. 
00:29:15.036 [2024-06-10 12:18:04.337895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.036 [2024-06-10 12:18:04.337906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.036 qpair failed and we were unable to recover it. 00:29:15.036 [2024-06-10 12:18:04.338054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.036 [2024-06-10 12:18:04.338067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.036 qpair failed and we were unable to recover it. 00:29:15.036 [2024-06-10 12:18:04.338216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.036 [2024-06-10 12:18:04.338230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.036 qpair failed and we were unable to recover it. 00:29:15.036 [2024-06-10 12:18:04.338326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.036 [2024-06-10 12:18:04.338339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.036 qpair failed and we were unable to recover it. 00:29:15.036 [2024-06-10 12:18:04.338500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.036 [2024-06-10 12:18:04.338514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.036 qpair failed and we were unable to recover it. 
00:29:15.036 [2024-06-10 12:18:04.338623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.036 [2024-06-10 12:18:04.338637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.036 qpair failed and we were unable to recover it. 00:29:15.036 [2024-06-10 12:18:04.338754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.036 [2024-06-10 12:18:04.338767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.036 qpair failed and we were unable to recover it. 00:29:15.036 [2024-06-10 12:18:04.338927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.036 [2024-06-10 12:18:04.338940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.036 qpair failed and we were unable to recover it. 00:29:15.036 [2024-06-10 12:18:04.339035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.036 [2024-06-10 12:18:04.339049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.036 qpair failed and we were unable to recover it. 00:29:15.036 [2024-06-10 12:18:04.339156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.036 [2024-06-10 12:18:04.339170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.036 qpair failed and we were unable to recover it. 
00:29:15.036 [2024-06-10 12:18:04.339270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.036 [2024-06-10 12:18:04.339283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.036 qpair failed and we were unable to recover it. 00:29:15.036 [2024-06-10 12:18:04.339432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.036 [2024-06-10 12:18:04.339446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.036 qpair failed and we were unable to recover it. 00:29:15.036 [2024-06-10 12:18:04.339603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.036 [2024-06-10 12:18:04.339616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.036 qpair failed and we were unable to recover it. 00:29:15.036 [2024-06-10 12:18:04.339705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.036 [2024-06-10 12:18:04.339718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.036 qpair failed and we were unable to recover it. 00:29:15.036 [2024-06-10 12:18:04.339867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.036 [2024-06-10 12:18:04.339879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.036 qpair failed and we were unable to recover it. 
00:29:15.036 [2024-06-10 12:18:04.340043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.036 [2024-06-10 12:18:04.340056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.036 qpair failed and we were unable to recover it. 00:29:15.036 [2024-06-10 12:18:04.340222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.036 [2024-06-10 12:18:04.340235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.036 qpair failed and we were unable to recover it. 00:29:15.036 [2024-06-10 12:18:04.340332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.036 [2024-06-10 12:18:04.340345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.036 qpair failed and we were unable to recover it. 00:29:15.036 [2024-06-10 12:18:04.340530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.036 [2024-06-10 12:18:04.340544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.036 qpair failed and we were unable to recover it. 00:29:15.036 [2024-06-10 12:18:04.340635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.036 [2024-06-10 12:18:04.340648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.036 qpair failed and we were unable to recover it. 
00:29:15.036 [2024-06-10 12:18:04.340794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.036 [2024-06-10 12:18:04.340808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.036 qpair failed and we were unable to recover it. 00:29:15.036 [2024-06-10 12:18:04.340992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.036 [2024-06-10 12:18:04.341011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.036 qpair failed and we were unable to recover it. 00:29:15.036 [2024-06-10 12:18:04.341118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.036 [2024-06-10 12:18:04.341130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.036 qpair failed and we were unable to recover it. 00:29:15.036 [2024-06-10 12:18:04.341217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.036 [2024-06-10 12:18:04.341231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.036 qpair failed and we were unable to recover it. 00:29:15.036 [2024-06-10 12:18:04.341302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.036 [2024-06-10 12:18:04.341319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.036 qpair failed and we were unable to recover it. 
00:29:15.036 [2024-06-10 12:18:04.341427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.036 [2024-06-10 12:18:04.341440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.036 qpair failed and we were unable to recover it. 00:29:15.036 [2024-06-10 12:18:04.341540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.036 [2024-06-10 12:18:04.341551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.036 qpair failed and we were unable to recover it. 00:29:15.036 [2024-06-10 12:18:04.341699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.036 [2024-06-10 12:18:04.341712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.036 qpair failed and we were unable to recover it. 00:29:15.036 [2024-06-10 12:18:04.341810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.036 [2024-06-10 12:18:04.341824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.036 qpair failed and we were unable to recover it. 00:29:15.036 [2024-06-10 12:18:04.341975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.036 [2024-06-10 12:18:04.341988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.036 qpair failed and we were unable to recover it. 
00:29:15.036 [2024-06-10 12:18:04.342088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.036 [2024-06-10 12:18:04.342101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.036 qpair failed and we were unable to recover it. 00:29:15.036 [2024-06-10 12:18:04.342261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.037 [2024-06-10 12:18:04.342274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.037 qpair failed and we were unable to recover it. 00:29:15.037 [2024-06-10 12:18:04.342428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.037 [2024-06-10 12:18:04.342441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.037 qpair failed and we were unable to recover it. 00:29:15.037 [2024-06-10 12:18:04.342534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.037 [2024-06-10 12:18:04.342547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.037 qpair failed and we were unable to recover it. 00:29:15.037 [2024-06-10 12:18:04.342635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.037 [2024-06-10 12:18:04.342648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.037 qpair failed and we were unable to recover it. 
00:29:15.037 [2024-06-10 12:18:04.342756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.037 [2024-06-10 12:18:04.342770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.037 qpair failed and we were unable to recover it. 00:29:15.037 [2024-06-10 12:18:04.342872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.037 [2024-06-10 12:18:04.342886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.037 qpair failed and we were unable to recover it. 00:29:15.037 [2024-06-10 12:18:04.342976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.037 [2024-06-10 12:18:04.342988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.037 qpair failed and we were unable to recover it. 00:29:15.037 [2024-06-10 12:18:04.343141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.037 [2024-06-10 12:18:04.343154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.037 qpair failed and we were unable to recover it. 00:29:15.037 [2024-06-10 12:18:04.343298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.037 [2024-06-10 12:18:04.343312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.037 qpair failed and we were unable to recover it. 
00:29:15.037 [2024-06-10 12:18:04.343456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.037 [2024-06-10 12:18:04.343469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.037 qpair failed and we were unable to recover it. 00:29:15.037 [2024-06-10 12:18:04.343554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.037 [2024-06-10 12:18:04.343567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.037 qpair failed and we were unable to recover it. 00:29:15.037 [2024-06-10 12:18:04.343724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.037 [2024-06-10 12:18:04.343737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.037 qpair failed and we were unable to recover it. 00:29:15.037 [2024-06-10 12:18:04.343897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.037 [2024-06-10 12:18:04.343911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.037 qpair failed and we were unable to recover it. 00:29:15.037 [2024-06-10 12:18:04.344059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.037 [2024-06-10 12:18:04.344072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.037 qpair failed and we were unable to recover it. 
00:29:15.037 [2024-06-10 12:18:04.344221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.037 [2024-06-10 12:18:04.344234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.037 qpair failed and we were unable to recover it. 00:29:15.037 [2024-06-10 12:18:04.344330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.037 [2024-06-10 12:18:04.344344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.037 qpair failed and we were unable to recover it. 00:29:15.037 [2024-06-10 12:18:04.344497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.037 [2024-06-10 12:18:04.344510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.037 qpair failed and we were unable to recover it. 00:29:15.037 [2024-06-10 12:18:04.344674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.037 [2024-06-10 12:18:04.344690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.037 qpair failed and we were unable to recover it. 00:29:15.037 [2024-06-10 12:18:04.344841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.037 [2024-06-10 12:18:04.344854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.037 qpair failed and we were unable to recover it. 
00:29:15.037 [2024-06-10 12:18:04.345026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.037 [2024-06-10 12:18:04.345039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.037 qpair failed and we were unable to recover it. 00:29:15.037 [2024-06-10 12:18:04.345131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.037 [2024-06-10 12:18:04.345145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.037 qpair failed and we were unable to recover it. 00:29:15.037 [2024-06-10 12:18:04.345292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.037 [2024-06-10 12:18:04.345305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.037 qpair failed and we were unable to recover it. 00:29:15.037 [2024-06-10 12:18:04.345455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.037 [2024-06-10 12:18:04.345468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.037 qpair failed and we were unable to recover it. 00:29:15.037 [2024-06-10 12:18:04.345688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.037 [2024-06-10 12:18:04.345701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.037 qpair failed and we were unable to recover it. 
00:29:15.037 [2024-06-10 12:18:04.345801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.037 [2024-06-10 12:18:04.345814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.037 qpair failed and we were unable to recover it. 00:29:15.037 [2024-06-10 12:18:04.345977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.037 [2024-06-10 12:18:04.345990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.037 qpair failed and we were unable to recover it. 00:29:15.037 [2024-06-10 12:18:04.346075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.037 [2024-06-10 12:18:04.346086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.037 qpair failed and we were unable to recover it. 00:29:15.037 [2024-06-10 12:18:04.346173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.037 [2024-06-10 12:18:04.346184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.037 qpair failed and we were unable to recover it. 00:29:15.037 [2024-06-10 12:18:04.346357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.037 [2024-06-10 12:18:04.346370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.037 qpair failed and we were unable to recover it. 
00:29:15.037 [2024-06-10 12:18:04.346457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.037 [2024-06-10 12:18:04.346469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.037 qpair failed and we were unable to recover it. 00:29:15.037 [2024-06-10 12:18:04.346641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.037 [2024-06-10 12:18:04.346655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.037 qpair failed and we were unable to recover it. 00:29:15.037 [2024-06-10 12:18:04.346805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.037 [2024-06-10 12:18:04.346818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.037 qpair failed and we were unable to recover it. 00:29:15.037 [2024-06-10 12:18:04.346916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.037 [2024-06-10 12:18:04.346930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.037 qpair failed and we were unable to recover it. 00:29:15.037 [2024-06-10 12:18:04.347077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.037 [2024-06-10 12:18:04.347090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.037 qpair failed and we were unable to recover it. 
00:29:15.040 [2024-06-10 12:18:04.363303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.040 [2024-06-10 12:18:04.363316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.040 qpair failed and we were unable to recover it. 00:29:15.040 [2024-06-10 12:18:04.363395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.040 [2024-06-10 12:18:04.363410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.040 qpair failed and we were unable to recover it. 00:29:15.040 [2024-06-10 12:18:04.363494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.040 [2024-06-10 12:18:04.363507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.040 qpair failed and we were unable to recover it. 00:29:15.040 [2024-06-10 12:18:04.363660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.040 [2024-06-10 12:18:04.363676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.040 qpair failed and we were unable to recover it. 00:29:15.040 [2024-06-10 12:18:04.363757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.040 [2024-06-10 12:18:04.363770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.040 qpair failed and we were unable to recover it. 
00:29:15.040 [2024-06-10 12:18:04.363998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.040 [2024-06-10 12:18:04.364012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.040 qpair failed and we were unable to recover it. 00:29:15.040 [2024-06-10 12:18:04.364161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.040 [2024-06-10 12:18:04.364174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.040 qpair failed and we were unable to recover it. 00:29:15.040 [2024-06-10 12:18:04.364268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.040 [2024-06-10 12:18:04.364281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.040 qpair failed and we were unable to recover it. 00:29:15.040 [2024-06-10 12:18:04.364378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.040 [2024-06-10 12:18:04.364392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.040 qpair failed and we were unable to recover it. 00:29:15.040 [2024-06-10 12:18:04.364538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.040 [2024-06-10 12:18:04.364551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.040 qpair failed and we were unable to recover it. 
00:29:15.040 [2024-06-10 12:18:04.364622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.040 [2024-06-10 12:18:04.364635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.040 qpair failed and we were unable to recover it. 00:29:15.040 [2024-06-10 12:18:04.364739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.040 [2024-06-10 12:18:04.364752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.040 qpair failed and we were unable to recover it. 00:29:15.040 [2024-06-10 12:18:04.364851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.040 [2024-06-10 12:18:04.364864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.040 qpair failed and we were unable to recover it. 00:29:15.040 [2024-06-10 12:18:04.365053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.040 [2024-06-10 12:18:04.365066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.040 qpair failed and we were unable to recover it. 00:29:15.040 [2024-06-10 12:18:04.365175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.040 [2024-06-10 12:18:04.365188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.040 qpair failed and we were unable to recover it. 
00:29:15.040 [2024-06-10 12:18:04.365285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.040 [2024-06-10 12:18:04.365297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.040 qpair failed and we were unable to recover it. 00:29:15.040 [2024-06-10 12:18:04.365372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.040 [2024-06-10 12:18:04.365384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.040 qpair failed and we were unable to recover it. 00:29:15.040 [2024-06-10 12:18:04.365552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.040 [2024-06-10 12:18:04.365565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.040 qpair failed and we were unable to recover it. 00:29:15.040 [2024-06-10 12:18:04.365661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.040 [2024-06-10 12:18:04.365674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.040 qpair failed and we were unable to recover it. 00:29:15.040 [2024-06-10 12:18:04.365754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.040 [2024-06-10 12:18:04.365766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.040 qpair failed and we were unable to recover it. 
00:29:15.040 [2024-06-10 12:18:04.365866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.040 [2024-06-10 12:18:04.365879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.040 qpair failed and we were unable to recover it. 00:29:15.040 [2024-06-10 12:18:04.366043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.041 [2024-06-10 12:18:04.366057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.041 qpair failed and we were unable to recover it. 00:29:15.041 [2024-06-10 12:18:04.366154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.041 [2024-06-10 12:18:04.366167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.041 qpair failed and we were unable to recover it. 00:29:15.041 [2024-06-10 12:18:04.366261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.041 [2024-06-10 12:18:04.366273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.041 qpair failed and we were unable to recover it. 00:29:15.041 [2024-06-10 12:18:04.366351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.041 [2024-06-10 12:18:04.366363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.041 qpair failed and we were unable to recover it. 
00:29:15.041 [2024-06-10 12:18:04.366442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.041 [2024-06-10 12:18:04.366455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.041 qpair failed and we were unable to recover it. 00:29:15.041 [2024-06-10 12:18:04.366607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.041 [2024-06-10 12:18:04.366620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.041 qpair failed and we were unable to recover it. 00:29:15.041 [2024-06-10 12:18:04.366767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.041 [2024-06-10 12:18:04.366781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.041 qpair failed and we were unable to recover it. 00:29:15.041 [2024-06-10 12:18:04.366931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.041 [2024-06-10 12:18:04.366945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.041 qpair failed and we were unable to recover it. 00:29:15.041 [2024-06-10 12:18:04.367094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.041 [2024-06-10 12:18:04.367107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.041 qpair failed and we were unable to recover it. 
00:29:15.041 [2024-06-10 12:18:04.367217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.041 [2024-06-10 12:18:04.367230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.041 qpair failed and we were unable to recover it. 00:29:15.041 [2024-06-10 12:18:04.367324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.041 [2024-06-10 12:18:04.367336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.041 qpair failed and we were unable to recover it. 00:29:15.041 [2024-06-10 12:18:04.367424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.041 [2024-06-10 12:18:04.367437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.041 qpair failed and we were unable to recover it. 00:29:15.041 [2024-06-10 12:18:04.367536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.041 [2024-06-10 12:18:04.367551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.041 qpair failed and we were unable to recover it. 00:29:15.041 [2024-06-10 12:18:04.367764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.041 [2024-06-10 12:18:04.367777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.041 qpair failed and we were unable to recover it. 
00:29:15.041 [2024-06-10 12:18:04.367856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.041 [2024-06-10 12:18:04.367869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.041 qpair failed and we were unable to recover it. 00:29:15.041 [2024-06-10 12:18:04.367949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.041 [2024-06-10 12:18:04.367960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.041 qpair failed and we were unable to recover it. 00:29:15.041 [2024-06-10 12:18:04.368191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.041 [2024-06-10 12:18:04.368204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.041 qpair failed and we were unable to recover it. 00:29:15.041 [2024-06-10 12:18:04.368285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.041 [2024-06-10 12:18:04.368297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.041 qpair failed and we were unable to recover it. 00:29:15.041 [2024-06-10 12:18:04.368380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.041 [2024-06-10 12:18:04.368393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.041 qpair failed and we were unable to recover it. 
00:29:15.041 [2024-06-10 12:18:04.368503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.041 [2024-06-10 12:18:04.368517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.041 qpair failed and we were unable to recover it. 00:29:15.041 [2024-06-10 12:18:04.368614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.041 [2024-06-10 12:18:04.368626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.041 qpair failed and we were unable to recover it. 00:29:15.041 [2024-06-10 12:18:04.368813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.041 [2024-06-10 12:18:04.368826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.041 qpair failed and we were unable to recover it. 00:29:15.041 [2024-06-10 12:18:04.368917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.041 [2024-06-10 12:18:04.368930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.041 qpair failed and we were unable to recover it. 00:29:15.041 [2024-06-10 12:18:04.369040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.041 [2024-06-10 12:18:04.369053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.041 qpair failed and we were unable to recover it. 
00:29:15.041 [2024-06-10 12:18:04.369151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.041 [2024-06-10 12:18:04.369163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.041 qpair failed and we were unable to recover it. 00:29:15.041 [2024-06-10 12:18:04.369260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.041 [2024-06-10 12:18:04.369273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.041 qpair failed and we were unable to recover it. 00:29:15.041 [2024-06-10 12:18:04.369430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.041 [2024-06-10 12:18:04.369444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.041 qpair failed and we were unable to recover it. 00:29:15.041 [2024-06-10 12:18:04.369535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.041 [2024-06-10 12:18:04.369549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.041 qpair failed and we were unable to recover it. 00:29:15.041 [2024-06-10 12:18:04.369643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.041 [2024-06-10 12:18:04.369655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.041 qpair failed and we were unable to recover it. 
00:29:15.041 [2024-06-10 12:18:04.369743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.041 [2024-06-10 12:18:04.369756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.041 qpair failed and we were unable to recover it. 00:29:15.041 [2024-06-10 12:18:04.369909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.041 [2024-06-10 12:18:04.369921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.041 qpair failed and we were unable to recover it. 00:29:15.041 [2024-06-10 12:18:04.370070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.041 [2024-06-10 12:18:04.370083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.041 qpair failed and we were unable to recover it. 00:29:15.041 [2024-06-10 12:18:04.370162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.042 [2024-06-10 12:18:04.370174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.042 qpair failed and we were unable to recover it. 00:29:15.042 [2024-06-10 12:18:04.370322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.042 [2024-06-10 12:18:04.370335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.042 qpair failed and we were unable to recover it. 
00:29:15.042 [2024-06-10 12:18:04.370424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.042 [2024-06-10 12:18:04.370437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.042 qpair failed and we were unable to recover it. 00:29:15.042 [2024-06-10 12:18:04.370574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.042 [2024-06-10 12:18:04.370588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.042 qpair failed and we were unable to recover it. 00:29:15.042 [2024-06-10 12:18:04.370754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.042 [2024-06-10 12:18:04.370767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.042 qpair failed and we were unable to recover it. 00:29:15.042 [2024-06-10 12:18:04.370846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.042 [2024-06-10 12:18:04.370858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.042 qpair failed and we were unable to recover it. 00:29:15.042 [2024-06-10 12:18:04.371016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.042 [2024-06-10 12:18:04.371029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.042 qpair failed and we were unable to recover it. 
00:29:15.042 [2024-06-10 12:18:04.371155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.042 [2024-06-10 12:18:04.371188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.042 qpair failed and we were unable to recover it. 00:29:15.042 [2024-06-10 12:18:04.371389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.042 [2024-06-10 12:18:04.371426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.042 qpair failed and we were unable to recover it. 00:29:15.042 [2024-06-10 12:18:04.371559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.042 [2024-06-10 12:18:04.371589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.042 qpair failed and we were unable to recover it. 00:29:15.042 [2024-06-10 12:18:04.371682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.042 [2024-06-10 12:18:04.371696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.042 qpair failed and we were unable to recover it. 00:29:15.042 [2024-06-10 12:18:04.371848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.042 [2024-06-10 12:18:04.371861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.042 qpair failed and we were unable to recover it. 
00:29:15.042 [2024-06-10 12:18:04.371953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.042 [2024-06-10 12:18:04.371966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.042 qpair failed and we were unable to recover it. 00:29:15.042 [2024-06-10 12:18:04.372169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.042 [2024-06-10 12:18:04.372182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.042 qpair failed and we were unable to recover it. 00:29:15.042 [2024-06-10 12:18:04.372268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.042 [2024-06-10 12:18:04.372280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.042 qpair failed and we were unable to recover it. 00:29:15.042 [2024-06-10 12:18:04.372368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.042 [2024-06-10 12:18:04.372381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.042 qpair failed and we were unable to recover it. 00:29:15.042 [2024-06-10 12:18:04.372532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.042 [2024-06-10 12:18:04.372545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.042 qpair failed and we were unable to recover it. 
00:29:15.042 [2024-06-10 12:18:04.372637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.042 [2024-06-10 12:18:04.372651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.042 qpair failed and we were unable to recover it. 00:29:15.042 [2024-06-10 12:18:04.372735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.042 [2024-06-10 12:18:04.372748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.042 qpair failed and we were unable to recover it. 00:29:15.042 [2024-06-10 12:18:04.372896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.042 [2024-06-10 12:18:04.372909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.042 qpair failed and we were unable to recover it. 00:29:15.042 [2024-06-10 12:18:04.373073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.042 [2024-06-10 12:18:04.373088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.042 qpair failed and we were unable to recover it. 00:29:15.042 [2024-06-10 12:18:04.373176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.042 [2024-06-10 12:18:04.373189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.042 qpair failed and we were unable to recover it. 
00:29:15.042 [2024-06-10 12:18:04.373269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.042 [2024-06-10 12:18:04.373281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.042 qpair failed and we were unable to recover it.
[... the same three-line failure pattern — posix.c:1037:posix_sock_create connect() failed with errno = 111, followed by nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock reporting a sock connection error, followed by "qpair failed and we were unable to recover it." — repeats continuously from 12:18:04.373499 through 12:18:04.390138 (log timestamps 00:29:15.042–00:29:15.046). All attempts target addr=10.0.0.2, port=4420; the affected tqpair pointers are 0x7f5044000b90 (most entries), 0x7f504c000b90, and 0x88cfc0 ...]
00:29:15.046 [2024-06-10 12:18:04.390376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-06-10 12:18:04.390389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-06-10 12:18:04.390474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-06-10 12:18:04.390492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-06-10 12:18:04.390609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-06-10 12:18:04.390622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-06-10 12:18:04.390768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-06-10 12:18:04.390781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-06-10 12:18:04.390882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-06-10 12:18:04.390895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 
00:29:15.046 [2024-06-10 12:18:04.391042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-06-10 12:18:04.391055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-06-10 12:18:04.391143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-06-10 12:18:04.391155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-06-10 12:18:04.391302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-06-10 12:18:04.391315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-06-10 12:18:04.391516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-06-10 12:18:04.391529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.046 [2024-06-10 12:18:04.391616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-06-10 12:18:04.391630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 
00:29:15.046 [2024-06-10 12:18:04.391787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.046 [2024-06-10 12:18:04.391800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.046 qpair failed and we were unable to recover it. 00:29:15.047 [2024-06-10 12:18:04.391965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-06-10 12:18:04.391978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-06-10 12:18:04.392139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-06-10 12:18:04.392152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-06-10 12:18:04.392309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-06-10 12:18:04.392321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-06-10 12:18:04.392485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-06-10 12:18:04.392499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 
00:29:15.047 [2024-06-10 12:18:04.392665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-06-10 12:18:04.392679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-06-10 12:18:04.392826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-06-10 12:18:04.392838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-06-10 12:18:04.392936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-06-10 12:18:04.392949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-06-10 12:18:04.393116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-06-10 12:18:04.393129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-06-10 12:18:04.393289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-06-10 12:18:04.393302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 
00:29:15.047 [2024-06-10 12:18:04.393386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-06-10 12:18:04.393399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-06-10 12:18:04.393544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-06-10 12:18:04.393557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-06-10 12:18:04.393639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-06-10 12:18:04.393653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-06-10 12:18:04.393813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-06-10 12:18:04.393825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-06-10 12:18:04.393907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-06-10 12:18:04.393919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 
00:29:15.047 [2024-06-10 12:18:04.394066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-06-10 12:18:04.394078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-06-10 12:18:04.394237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-06-10 12:18:04.394250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-06-10 12:18:04.394485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-06-10 12:18:04.394499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-06-10 12:18:04.394603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-06-10 12:18:04.394616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-06-10 12:18:04.394809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-06-10 12:18:04.394822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 
00:29:15.047 [2024-06-10 12:18:04.395050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-06-10 12:18:04.395064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-06-10 12:18:04.395209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-06-10 12:18:04.395222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-06-10 12:18:04.395390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-06-10 12:18:04.395403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-06-10 12:18:04.395570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-06-10 12:18:04.395583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-06-10 12:18:04.395744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-06-10 12:18:04.395757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 
00:29:15.047 [2024-06-10 12:18:04.395836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-06-10 12:18:04.395849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.047 [2024-06-10 12:18:04.396014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.047 [2024-06-10 12:18:04.396028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.047 qpair failed and we were unable to recover it. 00:29:15.048 [2024-06-10 12:18:04.396173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.396186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-06-10 12:18:04.396399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.396412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-06-10 12:18:04.396510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.396524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 
00:29:15.048 [2024-06-10 12:18:04.396695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.396707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-06-10 12:18:04.396801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.396815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-06-10 12:18:04.397046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.397059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-06-10 12:18:04.397160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.397173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-06-10 12:18:04.397325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.397338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 
00:29:15.048 [2024-06-10 12:18:04.397418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.397431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-06-10 12:18:04.397533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.397546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-06-10 12:18:04.397715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.397727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-06-10 12:18:04.397961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.397974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-06-10 12:18:04.398122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.398135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 
00:29:15.048 [2024-06-10 12:18:04.398380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.398393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-06-10 12:18:04.398507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.398520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-06-10 12:18:04.398669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.398682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-06-10 12:18:04.398849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.398862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-06-10 12:18:04.398972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.398984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 
00:29:15.048 [2024-06-10 12:18:04.399198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.399211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-06-10 12:18:04.399360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.399373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-06-10 12:18:04.399612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.399625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-06-10 12:18:04.399840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.399853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-06-10 12:18:04.400017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.400029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 
00:29:15.048 [2024-06-10 12:18:04.400123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.400137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-06-10 12:18:04.400322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.400335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-06-10 12:18:04.400446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.400459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-06-10 12:18:04.400705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.400719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-06-10 12:18:04.400894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.400908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 
00:29:15.048 [2024-06-10 12:18:04.401078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.401090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-06-10 12:18:04.401256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.401269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-06-10 12:18:04.401427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.401440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-06-10 12:18:04.401597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.401610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-06-10 12:18:04.401691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.401703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 
00:29:15.048 [2024-06-10 12:18:04.401886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.401901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-06-10 12:18:04.402050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.402062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-06-10 12:18:04.402168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.402181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-06-10 12:18:04.402361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.402373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 00:29:15.048 [2024-06-10 12:18:04.402545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.048 [2024-06-10 12:18:04.402571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.048 qpair failed and we were unable to recover it. 
00:29:15.048 [2024-06-10 12:18:04.402674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-06-10 12:18:04.402687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-06-10 12:18:04.402866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-06-10 12:18:04.402879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-06-10 12:18:04.403059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-06-10 12:18:04.403072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-06-10 12:18:04.403143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-06-10 12:18:04.403155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-06-10 12:18:04.403391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.049 [2024-06-10 12:18:04.403404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.049 qpair failed and we were unable to recover it. 
00:29:15.052 [2024-06-10 12:18:04.421929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-06-10 12:18:04.421946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-06-10 12:18:04.422096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-06-10 12:18:04.422109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-06-10 12:18:04.422211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-06-10 12:18:04.422224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-06-10 12:18:04.422392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-06-10 12:18:04.422406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-06-10 12:18:04.422566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-06-10 12:18:04.422580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 
00:29:15.052 [2024-06-10 12:18:04.422738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-06-10 12:18:04.422752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-06-10 12:18:04.422914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-06-10 12:18:04.422928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-06-10 12:18:04.423114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-06-10 12:18:04.423127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-06-10 12:18:04.423227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-06-10 12:18:04.423241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.053 [2024-06-10 12:18:04.423393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-06-10 12:18:04.423406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 
00:29:15.053 [2024-06-10 12:18:04.423570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-06-10 12:18:04.423583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-06-10 12:18:04.423684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-06-10 12:18:04.423698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-06-10 12:18:04.423845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-06-10 12:18:04.423858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-06-10 12:18:04.423945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-06-10 12:18:04.423957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-06-10 12:18:04.424174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-06-10 12:18:04.424187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 
00:29:15.053 [2024-06-10 12:18:04.424448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-06-10 12:18:04.424462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-06-10 12:18:04.424632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-06-10 12:18:04.424646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-06-10 12:18:04.424805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-06-10 12:18:04.424819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-06-10 12:18:04.424977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-06-10 12:18:04.424990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-06-10 12:18:04.425176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-06-10 12:18:04.425189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 
00:29:15.053 [2024-06-10 12:18:04.425294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-06-10 12:18:04.425307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-06-10 12:18:04.425396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-06-10 12:18:04.425408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-06-10 12:18:04.425555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-06-10 12:18:04.425569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-06-10 12:18:04.425730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-06-10 12:18:04.425743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-06-10 12:18:04.425908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-06-10 12:18:04.425921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 
00:29:15.053 [2024-06-10 12:18:04.426003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-06-10 12:18:04.426015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-06-10 12:18:04.426115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-06-10 12:18:04.426128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-06-10 12:18:04.426394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-06-10 12:18:04.426407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-06-10 12:18:04.426563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-06-10 12:18:04.426577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-06-10 12:18:04.426664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-06-10 12:18:04.426676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 
00:29:15.053 [2024-06-10 12:18:04.426846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-06-10 12:18:04.426860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-06-10 12:18:04.427012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-06-10 12:18:04.427025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-06-10 12:18:04.427266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-06-10 12:18:04.427279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-06-10 12:18:04.427436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-06-10 12:18:04.427450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-06-10 12:18:04.427684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-06-10 12:18:04.427698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 
00:29:15.053 [2024-06-10 12:18:04.427910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-06-10 12:18:04.427923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-06-10 12:18:04.428027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-06-10 12:18:04.428040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-06-10 12:18:04.428227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-06-10 12:18:04.428240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-06-10 12:18:04.428337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-06-10 12:18:04.428350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-06-10 12:18:04.428521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-06-10 12:18:04.428535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 
00:29:15.053 [2024-06-10 12:18:04.428682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-06-10 12:18:04.428697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-06-10 12:18:04.428783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-06-10 12:18:04.428795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.054 [2024-06-10 12:18:04.428915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-06-10 12:18:04.428927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-06-10 12:18:04.429019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-06-10 12:18:04.429032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-06-10 12:18:04.429279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-06-10 12:18:04.429292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 
00:29:15.054 [2024-06-10 12:18:04.429518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-06-10 12:18:04.429532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-06-10 12:18:04.429624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-06-10 12:18:04.429638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-06-10 12:18:04.429808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-06-10 12:18:04.429821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-06-10 12:18:04.429902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-06-10 12:18:04.429914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-06-10 12:18:04.430094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-06-10 12:18:04.430107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 
00:29:15.054 [2024-06-10 12:18:04.430266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-06-10 12:18:04.430279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-06-10 12:18:04.430461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-06-10 12:18:04.430474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-06-10 12:18:04.430643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-06-10 12:18:04.430657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-06-10 12:18:04.430740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-06-10 12:18:04.430752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-06-10 12:18:04.430991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-06-10 12:18:04.431005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 
00:29:15.054 [2024-06-10 12:18:04.431093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-06-10 12:18:04.431105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-06-10 12:18:04.431211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-06-10 12:18:04.431223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-06-10 12:18:04.431327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-06-10 12:18:04.431339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-06-10 12:18:04.431568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-06-10 12:18:04.431581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-06-10 12:18:04.431796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-06-10 12:18:04.431809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 
00:29:15.054 [2024-06-10 12:18:04.431980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-06-10 12:18:04.431993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-06-10 12:18:04.432078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-06-10 12:18:04.432090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-06-10 12:18:04.432264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-06-10 12:18:04.432277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-06-10 12:18:04.432427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-06-10 12:18:04.432440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-06-10 12:18:04.432597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-06-10 12:18:04.432610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 
00:29:15.054 [2024-06-10 12:18:04.432767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-06-10 12:18:04.432780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-06-10 12:18:04.432932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-06-10 12:18:04.432945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-06-10 12:18:04.433116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-06-10 12:18:04.433129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-06-10 12:18:04.433280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-06-10 12:18:04.433294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-06-10 12:18:04.433403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-06-10 12:18:04.433416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 
00:29:15.054 [2024-06-10 12:18:04.433510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-06-10 12:18:04.433523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-06-10 12:18:04.433688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-06-10 12:18:04.433702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-06-10 12:18:04.433850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-06-10 12:18:04.433864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-06-10 12:18:04.433967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-06-10 12:18:04.433980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-06-10 12:18:04.434242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-06-10 12:18:04.434255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 
00:29:15.054 [2024-06-10 12:18:04.434351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.054 [2024-06-10 12:18:04.434364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.055 qpair failed and we were unable to recover it.
00:29:15.055 [... the same three-line sequence — posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error / "qpair failed and we were unable to recover it." — repeats continuously from 12:18:04.434451 through 12:18:04.452593, for tqpairs 0x7f5044000b90, 0x7f503c000b90, and 0x7f504c000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:29:15.058 [2024-06-10 12:18:04.452746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-06-10 12:18:04.452759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-06-10 12:18:04.452908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-06-10 12:18:04.452921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-06-10 12:18:04.453066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-06-10 12:18:04.453080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-06-10 12:18:04.453180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-06-10 12:18:04.453193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-06-10 12:18:04.453300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-06-10 12:18:04.453314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 
00:29:15.058 [2024-06-10 12:18:04.453485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-06-10 12:18:04.453499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-06-10 12:18:04.453597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-06-10 12:18:04.453610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-06-10 12:18:04.453760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-06-10 12:18:04.453774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-06-10 12:18:04.453937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-06-10 12:18:04.453950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-06-10 12:18:04.454040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-06-10 12:18:04.454053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 
00:29:15.058 [2024-06-10 12:18:04.454199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-06-10 12:18:04.454214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-06-10 12:18:04.454367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-06-10 12:18:04.454381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-06-10 12:18:04.454475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-06-10 12:18:04.454494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-06-10 12:18:04.454668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-06-10 12:18:04.454681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-06-10 12:18:04.454897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-06-10 12:18:04.454910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 
00:29:15.058 [2024-06-10 12:18:04.455019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-06-10 12:18:04.455033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-06-10 12:18:04.455222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-06-10 12:18:04.455234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-06-10 12:18:04.455322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-06-10 12:18:04.455335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-06-10 12:18:04.455501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-06-10 12:18:04.455514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-06-10 12:18:04.455665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-06-10 12:18:04.455679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 
00:29:15.058 [2024-06-10 12:18:04.455769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-06-10 12:18:04.455782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-06-10 12:18:04.455869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-06-10 12:18:04.455881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-06-10 12:18:04.456044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-06-10 12:18:04.456057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-06-10 12:18:04.456148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-06-10 12:18:04.456161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-06-10 12:18:04.456277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-06-10 12:18:04.456290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 
00:29:15.058 [2024-06-10 12:18:04.456389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-06-10 12:18:04.456403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-06-10 12:18:04.456553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-06-10 12:18:04.456566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-06-10 12:18:04.456652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-06-10 12:18:04.456664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-06-10 12:18:04.456745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-06-10 12:18:04.456756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-06-10 12:18:04.456842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-06-10 12:18:04.456854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 
00:29:15.058 [2024-06-10 12:18:04.456949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-06-10 12:18:04.456962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-06-10 12:18:04.457066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.457079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-06-10 12:18:04.457235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.457247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-06-10 12:18:04.457408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.457422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-06-10 12:18:04.457519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.457532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 
00:29:15.059 [2024-06-10 12:18:04.457770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.457784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-06-10 12:18:04.457866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.457878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-06-10 12:18:04.457970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.457984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-06-10 12:18:04.458220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.458233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-06-10 12:18:04.458323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.458335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 
00:29:15.059 [2024-06-10 12:18:04.458429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.458442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-06-10 12:18:04.458626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.458639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-06-10 12:18:04.458826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.458839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-06-10 12:18:04.459011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.459024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-06-10 12:18:04.459186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.459199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 
00:29:15.059 [2024-06-10 12:18:04.459355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.459369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-06-10 12:18:04.459526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.459540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-06-10 12:18:04.459637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.459649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-06-10 12:18:04.459737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.459751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-06-10 12:18:04.459843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.459856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 
00:29:15.059 [2024-06-10 12:18:04.459935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.459947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-06-10 12:18:04.460029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.460041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-06-10 12:18:04.460143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.460155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-06-10 12:18:04.460327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.460341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-06-10 12:18:04.460489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.460503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 
00:29:15.059 [2024-06-10 12:18:04.460740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.460753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-06-10 12:18:04.460909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.460923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-06-10 12:18:04.461025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.461038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-06-10 12:18:04.461260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.461273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-06-10 12:18:04.461430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.461442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 
00:29:15.059 [2024-06-10 12:18:04.461552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.461566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-06-10 12:18:04.461747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.461760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-06-10 12:18:04.461918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.461932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-06-10 12:18:04.462041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.462054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-06-10 12:18:04.462127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.462139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 
00:29:15.059 [2024-06-10 12:18:04.462299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.462313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-06-10 12:18:04.462493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.462506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-06-10 12:18:04.462770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.462783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-06-10 12:18:04.462898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.462912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 00:29:15.059 [2024-06-10 12:18:04.463011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.059 [2024-06-10 12:18:04.463024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.059 qpair failed and we were unable to recover it. 
00:29:15.060 [2024-06-10 12:18:04.463120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-06-10 12:18:04.463133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 00:29:15.060 [2024-06-10 12:18:04.463317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-06-10 12:18:04.463330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 00:29:15.060 [2024-06-10 12:18:04.463495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-06-10 12:18:04.463509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 00:29:15.060 [2024-06-10 12:18:04.463591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-06-10 12:18:04.463603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 00:29:15.060 [2024-06-10 12:18:04.463686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.060 [2024-06-10 12:18:04.463698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.060 qpair failed and we were unable to recover it. 
00:29:15.060 [2024-06-10 12:18:04.463842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-06-10 12:18:04.463855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.063 [2024-06-10 12:18:04.480353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-06-10 12:18:04.480366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-06-10 12:18:04.480457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-06-10 12:18:04.480470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-06-10 12:18:04.480559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-06-10 12:18:04.480573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-06-10 12:18:04.480662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-06-10 12:18:04.480674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-06-10 12:18:04.480835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-06-10 12:18:04.480848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 
00:29:15.063 [2024-06-10 12:18:04.480943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-06-10 12:18:04.480955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-06-10 12:18:04.481049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-06-10 12:18:04.481062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-06-10 12:18:04.481161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-06-10 12:18:04.481177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-06-10 12:18:04.481324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-06-10 12:18:04.481338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-06-10 12:18:04.481429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-06-10 12:18:04.481441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 
00:29:15.063 [2024-06-10 12:18:04.481658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-06-10 12:18:04.481672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-06-10 12:18:04.481820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-06-10 12:18:04.481833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-06-10 12:18:04.481996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-06-10 12:18:04.482008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-06-10 12:18:04.482155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-06-10 12:18:04.482168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-06-10 12:18:04.482274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-06-10 12:18:04.482287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 
00:29:15.063 [2024-06-10 12:18:04.482436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-06-10 12:18:04.482449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-06-10 12:18:04.482633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-06-10 12:18:04.482647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-06-10 12:18:04.482738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-06-10 12:18:04.482750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-06-10 12:18:04.482990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-06-10 12:18:04.483003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-06-10 12:18:04.483152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-06-10 12:18:04.483165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 
00:29:15.063 [2024-06-10 12:18:04.483323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-06-10 12:18:04.483336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-06-10 12:18:04.483420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-06-10 12:18:04.483434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-06-10 12:18:04.483527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-06-10 12:18:04.483540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-06-10 12:18:04.483698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-06-10 12:18:04.483711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-06-10 12:18:04.483885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-06-10 12:18:04.483898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 
00:29:15.063 [2024-06-10 12:18:04.483997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-06-10 12:18:04.484009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-06-10 12:18:04.484161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.063 [2024-06-10 12:18:04.484174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.063 qpair failed and we were unable to recover it. 00:29:15.063 [2024-06-10 12:18:04.484253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-06-10 12:18:04.484265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 [2024-06-10 12:18:04.484348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-06-10 12:18:04.484360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 [2024-06-10 12:18:04.484443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-06-10 12:18:04.484457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 
00:29:15.064 [2024-06-10 12:18:04.484556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-06-10 12:18:04.484570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 [2024-06-10 12:18:04.484784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-06-10 12:18:04.484797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 [2024-06-10 12:18:04.484889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-06-10 12:18:04.484902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 [2024-06-10 12:18:04.484997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-06-10 12:18:04.485010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 [2024-06-10 12:18:04.485105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-06-10 12:18:04.485118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 
00:29:15.064 [2024-06-10 12:18:04.485269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-06-10 12:18:04.485282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 [2024-06-10 12:18:04.485362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-06-10 12:18:04.485374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 [2024-06-10 12:18:04.485461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.064 [2024-06-10 12:18:04.485473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.064 qpair failed and we were unable to recover it. 00:29:15.064 [2024-06-10 12:18:04.485627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.485641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 00:29:15.350 [2024-06-10 12:18:04.485780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.485794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 
00:29:15.350 [2024-06-10 12:18:04.485941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.485955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 00:29:15.350 [2024-06-10 12:18:04.486032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.486044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 00:29:15.350 [2024-06-10 12:18:04.486259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.486273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 00:29:15.350 [2024-06-10 12:18:04.486373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.486387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 00:29:15.350 [2024-06-10 12:18:04.486546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.486560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 
00:29:15.350 [2024-06-10 12:18:04.486708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.486722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 00:29:15.350 [2024-06-10 12:18:04.486834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.486847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 00:29:15.350 [2024-06-10 12:18:04.487028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.487043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 00:29:15.350 [2024-06-10 12:18:04.487144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.487158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 00:29:15.350 [2024-06-10 12:18:04.487277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.487290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 
00:29:15.350 [2024-06-10 12:18:04.487387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.487400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 00:29:15.350 [2024-06-10 12:18:04.487559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.487573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 00:29:15.350 [2024-06-10 12:18:04.487727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.487741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 00:29:15.350 [2024-06-10 12:18:04.487831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.487842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 00:29:15.350 [2024-06-10 12:18:04.488061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.488074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 
00:29:15.350 [2024-06-10 12:18:04.488157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.488169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 00:29:15.350 [2024-06-10 12:18:04.488238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.488250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 00:29:15.350 [2024-06-10 12:18:04.488336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.488349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 00:29:15.350 [2024-06-10 12:18:04.488442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.488454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 00:29:15.350 [2024-06-10 12:18:04.488611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.488624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 
00:29:15.350 [2024-06-10 12:18:04.488705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.488719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 00:29:15.350 [2024-06-10 12:18:04.488805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.488818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 00:29:15.350 [2024-06-10 12:18:04.488913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.488925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 00:29:15.350 [2024-06-10 12:18:04.489018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.489031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 00:29:15.350 [2024-06-10 12:18:04.489121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.489134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 
00:29:15.350 [2024-06-10 12:18:04.489228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.489241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 00:29:15.350 [2024-06-10 12:18:04.489336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.489349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 00:29:15.350 [2024-06-10 12:18:04.489456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.489469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 00:29:15.350 [2024-06-10 12:18:04.489566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.489579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 00:29:15.350 [2024-06-10 12:18:04.489673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.489686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 
00:29:15.350 [2024-06-10 12:18:04.489776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.489790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 00:29:15.350 [2024-06-10 12:18:04.489904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.489917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 00:29:15.350 [2024-06-10 12:18:04.490064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.490077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 00:29:15.350 [2024-06-10 12:18:04.490222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.490235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 00:29:15.350 [2024-06-10 12:18:04.490315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.490328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 
00:29:15.350 [2024-06-10 12:18:04.490516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.490530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 00:29:15.350 [2024-06-10 12:18:04.490612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.350 [2024-06-10 12:18:04.490625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.350 qpair failed and we were unable to recover it. 00:29:15.351 [2024-06-10 12:18:04.490800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.351 [2024-06-10 12:18:04.490813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.351 qpair failed and we were unable to recover it. 00:29:15.351 [2024-06-10 12:18:04.490954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.351 [2024-06-10 12:18:04.490967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.351 qpair failed and we were unable to recover it. 00:29:15.351 [2024-06-10 12:18:04.491125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.351 [2024-06-10 12:18:04.491139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.351 qpair failed and we were unable to recover it. 
00:29:15.354 [2024-06-10 12:18:04.507633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.354 [2024-06-10 12:18:04.507646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.354 qpair failed and we were unable to recover it.
00:29:15.354 [2024-06-10 12:18:04.507832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.354 [2024-06-10 12:18:04.507846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.354 qpair failed and we were unable to recover it. 00:29:15.354 [2024-06-10 12:18:04.507927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.354 [2024-06-10 12:18:04.507940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.354 qpair failed and we were unable to recover it. 00:29:15.354 [2024-06-10 12:18:04.508036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.354 [2024-06-10 12:18:04.508050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.354 qpair failed and we were unable to recover it. 00:29:15.354 [2024-06-10 12:18:04.508156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.354 [2024-06-10 12:18:04.508169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.354 qpair failed and we were unable to recover it. 00:29:15.354 [2024-06-10 12:18:04.508350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.354 [2024-06-10 12:18:04.508363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.354 qpair failed and we were unable to recover it. 
00:29:15.354 [2024-06-10 12:18:04.508455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.354 [2024-06-10 12:18:04.508468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.354 qpair failed and we were unable to recover it. 00:29:15.354 [2024-06-10 12:18:04.508653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.354 [2024-06-10 12:18:04.508667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.354 qpair failed and we were unable to recover it. 00:29:15.354 [2024-06-10 12:18:04.508885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.354 [2024-06-10 12:18:04.508898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.354 qpair failed and we were unable to recover it. 00:29:15.354 [2024-06-10 12:18:04.509011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.354 [2024-06-10 12:18:04.509037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.354 qpair failed and we were unable to recover it. 00:29:15.354 [2024-06-10 12:18:04.509127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.354 [2024-06-10 12:18:04.509140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.354 qpair failed and we were unable to recover it. 
00:29:15.354 [2024-06-10 12:18:04.509361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.354 [2024-06-10 12:18:04.509374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.354 qpair failed and we were unable to recover it. 00:29:15.354 [2024-06-10 12:18:04.509468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.354 [2024-06-10 12:18:04.509485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.354 qpair failed and we were unable to recover it. 00:29:15.354 [2024-06-10 12:18:04.509636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.354 [2024-06-10 12:18:04.509649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.354 qpair failed and we were unable to recover it. 00:29:15.354 [2024-06-10 12:18:04.509795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.354 [2024-06-10 12:18:04.509808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.354 qpair failed and we were unable to recover it. 00:29:15.354 [2024-06-10 12:18:04.509954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.354 [2024-06-10 12:18:04.509967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.354 qpair failed and we were unable to recover it. 
00:29:15.354 [2024-06-10 12:18:04.510056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.354 [2024-06-10 12:18:04.510069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.354 qpair failed and we were unable to recover it. 00:29:15.354 [2024-06-10 12:18:04.510234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.354 [2024-06-10 12:18:04.510246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.354 qpair failed and we were unable to recover it. 00:29:15.354 [2024-06-10 12:18:04.510337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.354 [2024-06-10 12:18:04.510352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.354 qpair failed and we were unable to recover it. 00:29:15.354 [2024-06-10 12:18:04.510505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.354 [2024-06-10 12:18:04.510518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.354 qpair failed and we were unable to recover it. 00:29:15.354 [2024-06-10 12:18:04.510598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.354 [2024-06-10 12:18:04.510611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.354 qpair failed and we were unable to recover it. 
00:29:15.354 [2024-06-10 12:18:04.510759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.354 [2024-06-10 12:18:04.510772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.354 qpair failed and we were unable to recover it. 00:29:15.354 [2024-06-10 12:18:04.510895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.354 [2024-06-10 12:18:04.510909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.354 qpair failed and we were unable to recover it. 00:29:15.354 [2024-06-10 12:18:04.510998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.354 [2024-06-10 12:18:04.511011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.354 qpair failed and we were unable to recover it. 00:29:15.354 [2024-06-10 12:18:04.511175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.354 [2024-06-10 12:18:04.511188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.354 qpair failed and we were unable to recover it. 00:29:15.354 [2024-06-10 12:18:04.511341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.354 [2024-06-10 12:18:04.511354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.354 qpair failed and we were unable to recover it. 
00:29:15.354 [2024-06-10 12:18:04.511515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.354 [2024-06-10 12:18:04.511528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.354 qpair failed and we were unable to recover it. 00:29:15.354 [2024-06-10 12:18:04.511634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.354 [2024-06-10 12:18:04.511647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.354 qpair failed and we were unable to recover it. 00:29:15.354 [2024-06-10 12:18:04.511720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.354 [2024-06-10 12:18:04.511733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.354 qpair failed and we were unable to recover it. 00:29:15.354 [2024-06-10 12:18:04.511818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.354 [2024-06-10 12:18:04.511830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.354 qpair failed and we were unable to recover it. 00:29:15.354 [2024-06-10 12:18:04.511912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.354 [2024-06-10 12:18:04.511925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.354 qpair failed and we were unable to recover it. 
00:29:15.354 [2024-06-10 12:18:04.512072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.354 [2024-06-10 12:18:04.512085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.354 qpair failed and we were unable to recover it. 00:29:15.354 [2024-06-10 12:18:04.512237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.354 [2024-06-10 12:18:04.512251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.354 qpair failed and we were unable to recover it. 00:29:15.354 [2024-06-10 12:18:04.512441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.354 [2024-06-10 12:18:04.512455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.354 qpair failed and we were unable to recover it. 00:29:15.354 [2024-06-10 12:18:04.512590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.512603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-06-10 12:18:04.512703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.512717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 
00:29:15.355 [2024-06-10 12:18:04.512880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.512892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-06-10 12:18:04.512972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.512984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-06-10 12:18:04.513093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.513106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-06-10 12:18:04.513197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.513210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-06-10 12:18:04.513362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.513375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 
00:29:15.355 [2024-06-10 12:18:04.513529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.513543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-06-10 12:18:04.513690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.513703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-06-10 12:18:04.513855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.513868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-06-10 12:18:04.514044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.514057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-06-10 12:18:04.514158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.514171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 
00:29:15.355 [2024-06-10 12:18:04.514255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.514269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-06-10 12:18:04.514428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.514441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-06-10 12:18:04.514680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.514694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-06-10 12:18:04.514765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.514778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-06-10 12:18:04.514890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.514903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 
00:29:15.355 [2024-06-10 12:18:04.514983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.514995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-06-10 12:18:04.515171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.515185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-06-10 12:18:04.515341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.515354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-06-10 12:18:04.515463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.515480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-06-10 12:18:04.515583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.515596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 
00:29:15.355 [2024-06-10 12:18:04.515754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.515767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-06-10 12:18:04.515935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.515948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-06-10 12:18:04.516031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.516047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-06-10 12:18:04.516194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.516207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-06-10 12:18:04.516356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.516369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 
00:29:15.355 [2024-06-10 12:18:04.516448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.516462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-06-10 12:18:04.516553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.516566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-06-10 12:18:04.516725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.516738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-06-10 12:18:04.516818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.516831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-06-10 12:18:04.516980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.516993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 
00:29:15.355 [2024-06-10 12:18:04.517175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.517188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-06-10 12:18:04.517288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.517301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-06-10 12:18:04.517453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.517466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-06-10 12:18:04.517558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.517571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-06-10 12:18:04.517671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.517684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 
00:29:15.355 [2024-06-10 12:18:04.517849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.517862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-06-10 12:18:04.517957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-06-10 12:18:04.517970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.356 [2024-06-10 12:18:04.518076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-06-10 12:18:04.518089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-06-10 12:18:04.518176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-06-10 12:18:04.518188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-06-10 12:18:04.518335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-06-10 12:18:04.518348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 
00:29:15.356 [2024-06-10 12:18:04.518442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.356 [2024-06-10 12:18:04.518454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.356 qpair failed and we were unable to recover it.
00:29:15.359 [log condensed: the same three-line sequence — connect() failed with errno = 111 (ECONNREFUSED), followed by the nvme_tcp_qpair_connect_sock connection error for tqpair=0x7f5044000b90 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." — repeated continuously from 12:18:04.518442 through 12:18:04.535269 with no other output]
00:29:15.359 [2024-06-10 12:18:04.535367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-06-10 12:18:04.535380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 00:29:15.359 [2024-06-10 12:18:04.535499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-06-10 12:18:04.535512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 00:29:15.359 [2024-06-10 12:18:04.535681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-06-10 12:18:04.535694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 00:29:15.359 [2024-06-10 12:18:04.535885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-06-10 12:18:04.535898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 00:29:15.359 [2024-06-10 12:18:04.535986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-06-10 12:18:04.536000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 
00:29:15.359 [2024-06-10 12:18:04.536147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-06-10 12:18:04.536160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 00:29:15.359 [2024-06-10 12:18:04.536306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-06-10 12:18:04.536319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 00:29:15.359 [2024-06-10 12:18:04.536413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-06-10 12:18:04.536425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 00:29:15.359 [2024-06-10 12:18:04.536576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-06-10 12:18:04.536590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 00:29:15.359 [2024-06-10 12:18:04.536736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-06-10 12:18:04.536749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 
00:29:15.359 [2024-06-10 12:18:04.536845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-06-10 12:18:04.536858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 00:29:15.359 [2024-06-10 12:18:04.536957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-06-10 12:18:04.536970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 00:29:15.359 [2024-06-10 12:18:04.537122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-06-10 12:18:04.537135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 00:29:15.359 [2024-06-10 12:18:04.537307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-06-10 12:18:04.537321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 00:29:15.359 [2024-06-10 12:18:04.537480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-06-10 12:18:04.537495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 
00:29:15.359 [2024-06-10 12:18:04.537570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-06-10 12:18:04.537582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 00:29:15.359 [2024-06-10 12:18:04.537675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-06-10 12:18:04.537688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 00:29:15.359 [2024-06-10 12:18:04.537774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-06-10 12:18:04.537787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 00:29:15.359 [2024-06-10 12:18:04.537878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-06-10 12:18:04.537890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 00:29:15.359 [2024-06-10 12:18:04.537979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-06-10 12:18:04.537992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 
00:29:15.359 [2024-06-10 12:18:04.538084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-06-10 12:18:04.538097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 00:29:15.359 [2024-06-10 12:18:04.538184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-06-10 12:18:04.538198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 00:29:15.359 [2024-06-10 12:18:04.538343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-06-10 12:18:04.538356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 00:29:15.359 [2024-06-10 12:18:04.538574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-06-10 12:18:04.538588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 00:29:15.359 [2024-06-10 12:18:04.538674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-06-10 12:18:04.538687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 
00:29:15.359 [2024-06-10 12:18:04.538844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-06-10 12:18:04.538857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 00:29:15.359 [2024-06-10 12:18:04.539017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-06-10 12:18:04.539030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 00:29:15.359 [2024-06-10 12:18:04.539143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-06-10 12:18:04.539156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 00:29:15.359 [2024-06-10 12:18:04.539238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.539251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 00:29:15.360 [2024-06-10 12:18:04.539421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.539434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 
00:29:15.360 [2024-06-10 12:18:04.539550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.539564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 00:29:15.360 [2024-06-10 12:18:04.539649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.539661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 00:29:15.360 [2024-06-10 12:18:04.539829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.539843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 00:29:15.360 [2024-06-10 12:18:04.539969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.539982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 00:29:15.360 [2024-06-10 12:18:04.540173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.540186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 
00:29:15.360 [2024-06-10 12:18:04.540275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.540288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 00:29:15.360 [2024-06-10 12:18:04.540407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.540420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 00:29:15.360 [2024-06-10 12:18:04.540522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.540552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 00:29:15.360 [2024-06-10 12:18:04.540638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.540652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 00:29:15.360 [2024-06-10 12:18:04.540881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.540906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 
00:29:15.360 [2024-06-10 12:18:04.541060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.541073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 00:29:15.360 [2024-06-10 12:18:04.541168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.541181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 00:29:15.360 [2024-06-10 12:18:04.541349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.541361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 00:29:15.360 [2024-06-10 12:18:04.541585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.541598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 00:29:15.360 [2024-06-10 12:18:04.541764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.541777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 
00:29:15.360 [2024-06-10 12:18:04.541932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.541945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 00:29:15.360 [2024-06-10 12:18:04.542040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.542053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 00:29:15.360 [2024-06-10 12:18:04.542137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.542150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 00:29:15.360 [2024-06-10 12:18:04.542254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.542267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 00:29:15.360 [2024-06-10 12:18:04.542365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.542378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 
00:29:15.360 [2024-06-10 12:18:04.542547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.542560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 00:29:15.360 [2024-06-10 12:18:04.542773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.542787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 00:29:15.360 [2024-06-10 12:18:04.543049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.543062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 00:29:15.360 [2024-06-10 12:18:04.543175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.543188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 00:29:15.360 [2024-06-10 12:18:04.543349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.543364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 
00:29:15.360 [2024-06-10 12:18:04.543520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.543533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 00:29:15.360 [2024-06-10 12:18:04.543771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.543784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 00:29:15.360 [2024-06-10 12:18:04.543897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.543910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 00:29:15.360 [2024-06-10 12:18:04.544000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.544013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 00:29:15.360 [2024-06-10 12:18:04.544122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.544135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 
00:29:15.360 [2024-06-10 12:18:04.544306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.544319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 00:29:15.360 [2024-06-10 12:18:04.544412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.544425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 00:29:15.360 [2024-06-10 12:18:04.544591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.544604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 00:29:15.360 [2024-06-10 12:18:04.544687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.544700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 00:29:15.360 [2024-06-10 12:18:04.544929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.544942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 
00:29:15.360 [2024-06-10 12:18:04.545114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.545127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 00:29:15.360 [2024-06-10 12:18:04.545213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.360 [2024-06-10 12:18:04.545225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.360 qpair failed and we were unable to recover it. 00:29:15.361 [2024-06-10 12:18:04.545499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.361 [2024-06-10 12:18:04.545513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.361 qpair failed and we were unable to recover it. 00:29:15.361 [2024-06-10 12:18:04.545670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.361 [2024-06-10 12:18:04.545683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.361 qpair failed and we were unable to recover it. 00:29:15.361 [2024-06-10 12:18:04.545798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.361 [2024-06-10 12:18:04.545811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.361 qpair failed and we were unable to recover it. 
00:29:15.361 [2024-06-10 12:18:04.545976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.361 [2024-06-10 12:18:04.545988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.361 qpair failed and we were unable to recover it. 00:29:15.361 [2024-06-10 12:18:04.546068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.361 [2024-06-10 12:18:04.546081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.361 qpair failed and we were unable to recover it. 00:29:15.361 [2024-06-10 12:18:04.546240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.361 [2024-06-10 12:18:04.546253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.361 qpair failed and we were unable to recover it. 00:29:15.361 [2024-06-10 12:18:04.546346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.361 [2024-06-10 12:18:04.546359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.361 qpair failed and we were unable to recover it. 00:29:15.361 [2024-06-10 12:18:04.546443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.361 [2024-06-10 12:18:04.546456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.361 qpair failed and we were unable to recover it. 
00:29:15.361 [2024-06-10 12:18:04.546706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.361 [2024-06-10 12:18:04.546720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.361 qpair failed and we were unable to recover it. 00:29:15.361 [2024-06-10 12:18:04.546869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.361 [2024-06-10 12:18:04.546882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.361 qpair failed and we were unable to recover it. 00:29:15.361 [2024-06-10 12:18:04.547059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.361 [2024-06-10 12:18:04.547072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.361 qpair failed and we were unable to recover it. 00:29:15.361 [2024-06-10 12:18:04.547220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.361 [2024-06-10 12:18:04.547233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.361 qpair failed and we were unable to recover it. 00:29:15.361 [2024-06-10 12:18:04.547406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.361 [2024-06-10 12:18:04.547419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.361 qpair failed and we were unable to recover it. 
00:29:15.364 [2024-06-10 12:18:04.564133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.364 [2024-06-10 12:18:04.564170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.364 qpair failed and we were unable to recover it. 
00:29:15.364 [2024-06-10 12:18:04.564374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.364 [2024-06-10 12:18:04.564406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.364 qpair failed and we were unable to recover it. 00:29:15.364 [2024-06-10 12:18:04.564642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.364 [2024-06-10 12:18:04.564678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.364 qpair failed and we were unable to recover it. 00:29:15.364 [2024-06-10 12:18:04.564852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.364 [2024-06-10 12:18:04.564870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.364 qpair failed and we were unable to recover it. 00:29:15.364 [2024-06-10 12:18:04.565096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.364 [2024-06-10 12:18:04.565109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.364 qpair failed and we were unable to recover it. 00:29:15.364 [2024-06-10 12:18:04.565216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.364 [2024-06-10 12:18:04.565229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.364 qpair failed and we were unable to recover it. 
00:29:15.364 [2024-06-10 12:18:04.565485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.364 [2024-06-10 12:18:04.565498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.364 qpair failed and we were unable to recover it. 00:29:15.364 [2024-06-10 12:18:04.565646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.364 [2024-06-10 12:18:04.565659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.364 qpair failed and we were unable to recover it. 00:29:15.364 [2024-06-10 12:18:04.565813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.364 [2024-06-10 12:18:04.565826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.364 qpair failed and we were unable to recover it. 00:29:15.364 [2024-06-10 12:18:04.565919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.364 [2024-06-10 12:18:04.565932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.364 qpair failed and we were unable to recover it. 00:29:15.364 [2024-06-10 12:18:04.566080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.364 [2024-06-10 12:18:04.566093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.364 qpair failed and we were unable to recover it. 
00:29:15.364 [2024-06-10 12:18:04.566258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.364 [2024-06-10 12:18:04.566271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.364 qpair failed and we were unable to recover it. 00:29:15.364 [2024-06-10 12:18:04.566375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.364 [2024-06-10 12:18:04.566389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.364 qpair failed and we were unable to recover it. 00:29:15.364 [2024-06-10 12:18:04.566564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.364 [2024-06-10 12:18:04.566578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.364 qpair failed and we were unable to recover it. 00:29:15.364 [2024-06-10 12:18:04.566680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.364 [2024-06-10 12:18:04.566693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.364 qpair failed and we were unable to recover it. 00:29:15.364 [2024-06-10 12:18:04.566853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.364 [2024-06-10 12:18:04.566866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.364 qpair failed and we were unable to recover it. 
00:29:15.364 [2024-06-10 12:18:04.567037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.364 [2024-06-10 12:18:04.567050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.364 qpair failed and we were unable to recover it. 00:29:15.364 [2024-06-10 12:18:04.567141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.364 [2024-06-10 12:18:04.567153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.364 qpair failed and we were unable to recover it. 00:29:15.364 [2024-06-10 12:18:04.567265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.364 [2024-06-10 12:18:04.567278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.364 qpair failed and we were unable to recover it. 00:29:15.364 [2024-06-10 12:18:04.567496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.364 [2024-06-10 12:18:04.567525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.364 qpair failed and we were unable to recover it. 00:29:15.364 [2024-06-10 12:18:04.567699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.364 [2024-06-10 12:18:04.567713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.364 qpair failed and we were unable to recover it. 
00:29:15.364 [2024-06-10 12:18:04.567863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.364 [2024-06-10 12:18:04.567877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.364 qpair failed and we were unable to recover it. 00:29:15.364 [2024-06-10 12:18:04.568026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.364 [2024-06-10 12:18:04.568040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.364 qpair failed and we were unable to recover it. 00:29:15.364 [2024-06-10 12:18:04.568254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.364 [2024-06-10 12:18:04.568268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.364 qpair failed and we were unable to recover it. 00:29:15.364 [2024-06-10 12:18:04.568402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.364 [2024-06-10 12:18:04.568415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.364 qpair failed and we were unable to recover it. 00:29:15.364 [2024-06-10 12:18:04.568582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.568597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 
00:29:15.365 [2024-06-10 12:18:04.568712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.568726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-06-10 12:18:04.568841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.568855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-06-10 12:18:04.569019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.569045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-06-10 12:18:04.569206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.569219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-06-10 12:18:04.569309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.569322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 
00:29:15.365 [2024-06-10 12:18:04.569408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.569420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-06-10 12:18:04.569551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.569565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-06-10 12:18:04.569733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.569747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-06-10 12:18:04.569842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.569856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-06-10 12:18:04.570018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.570031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 
00:29:15.365 [2024-06-10 12:18:04.570125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.570137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-06-10 12:18:04.570284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.570297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-06-10 12:18:04.570448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.570462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-06-10 12:18:04.570633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.570647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-06-10 12:18:04.570739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.570752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 
00:29:15.365 [2024-06-10 12:18:04.570930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.570943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-06-10 12:18:04.571039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.571052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-06-10 12:18:04.571119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.571131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-06-10 12:18:04.571218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.571231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-06-10 12:18:04.571310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.571323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 
00:29:15.365 [2024-06-10 12:18:04.571419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.571432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-06-10 12:18:04.571556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.571569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-06-10 12:18:04.571785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.571799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-06-10 12:18:04.571904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.571918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-06-10 12:18:04.572072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.572098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 
00:29:15.365 [2024-06-10 12:18:04.572184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.572199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-06-10 12:18:04.572362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.572375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-06-10 12:18:04.572525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.572538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-06-10 12:18:04.572639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.572653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-06-10 12:18:04.572814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.572827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 
00:29:15.365 [2024-06-10 12:18:04.572936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.572951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-06-10 12:18:04.573037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.573053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-06-10 12:18:04.573212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.573226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-06-10 12:18:04.573327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.573341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-06-10 12:18:04.573495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.573524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 
00:29:15.365 [2024-06-10 12:18:04.573621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.573635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-06-10 12:18:04.573739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.573753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-06-10 12:18:04.573836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-06-10 12:18:04.573862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-06-10 12:18:04.573954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-06-10 12:18:04.573967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-06-10 12:18:04.574081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-06-10 12:18:04.574096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 
00:29:15.366 [2024-06-10 12:18:04.574218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-06-10 12:18:04.574232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-06-10 12:18:04.574381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-06-10 12:18:04.574395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-06-10 12:18:04.574623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-06-10 12:18:04.574637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-06-10 12:18:04.574725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-06-10 12:18:04.574739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-06-10 12:18:04.574923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-06-10 12:18:04.574937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 
00:29:15.366 [2024-06-10 12:18:04.575031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-06-10 12:18:04.575045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-06-10 12:18:04.575181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-06-10 12:18:04.575195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-06-10 12:18:04.575365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-06-10 12:18:04.575379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-06-10 12:18:04.575657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-06-10 12:18:04.575670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-06-10 12:18:04.575823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-06-10 12:18:04.575836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 
00:29:15.366 [2024-06-10 12:18:04.575991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-06-10 12:18:04.576005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-06-10 12:18:04.576170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-06-10 12:18:04.576184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-06-10 12:18:04.576427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-06-10 12:18:04.576440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-06-10 12:18:04.576667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-06-10 12:18:04.576681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-06-10 12:18:04.576830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-06-10 12:18:04.576844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 
00:29:15.369 [2024-06-10 12:18:04.594277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.369 [2024-06-10 12:18:04.594291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.369 qpair failed and we were unable to recover it. 00:29:15.369 [2024-06-10 12:18:04.594486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.369 [2024-06-10 12:18:04.594499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.369 qpair failed and we were unable to recover it. 00:29:15.369 [2024-06-10 12:18:04.594590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.369 [2024-06-10 12:18:04.594603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.369 qpair failed and we were unable to recover it. 00:29:15.369 [2024-06-10 12:18:04.594754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.369 [2024-06-10 12:18:04.594767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.369 qpair failed and we were unable to recover it. 00:29:15.369 [2024-06-10 12:18:04.594924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.369 [2024-06-10 12:18:04.594937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.369 qpair failed and we were unable to recover it. 
00:29:15.369 [2024-06-10 12:18:04.595020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.369 [2024-06-10 12:18:04.595032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.369 qpair failed and we were unable to recover it. 00:29:15.369 [2024-06-10 12:18:04.595291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.369 [2024-06-10 12:18:04.595305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.369 qpair failed and we were unable to recover it. 00:29:15.369 [2024-06-10 12:18:04.595407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.369 [2024-06-10 12:18:04.595420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.369 qpair failed and we were unable to recover it. 00:29:15.369 [2024-06-10 12:18:04.595660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.369 [2024-06-10 12:18:04.595674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.369 qpair failed and we were unable to recover it. 00:29:15.369 [2024-06-10 12:18:04.595767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.369 [2024-06-10 12:18:04.595779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.369 qpair failed and we were unable to recover it. 
00:29:15.369 [2024-06-10 12:18:04.595945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.369 [2024-06-10 12:18:04.595958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.369 qpair failed and we were unable to recover it. 00:29:15.369 [2024-06-10 12:18:04.596105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.369 [2024-06-10 12:18:04.596118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.369 qpair failed and we were unable to recover it. 00:29:15.369 [2024-06-10 12:18:04.596213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.369 [2024-06-10 12:18:04.596226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.369 qpair failed and we were unable to recover it. 00:29:15.369 [2024-06-10 12:18:04.596384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.369 [2024-06-10 12:18:04.596397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.369 qpair failed and we were unable to recover it. 00:29:15.369 [2024-06-10 12:18:04.596612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.369 [2024-06-10 12:18:04.596626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.369 qpair failed and we were unable to recover it. 
00:29:15.369 [2024-06-10 12:18:04.596772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.369 [2024-06-10 12:18:04.596785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.369 qpair failed and we were unable to recover it. 00:29:15.369 [2024-06-10 12:18:04.596955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.369 [2024-06-10 12:18:04.596968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.369 qpair failed and we were unable to recover it. 00:29:15.369 [2024-06-10 12:18:04.597064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.369 [2024-06-10 12:18:04.597077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.369 qpair failed and we were unable to recover it. 00:29:15.369 [2024-06-10 12:18:04.597168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.369 [2024-06-10 12:18:04.597181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.369 qpair failed and we were unable to recover it. 00:29:15.369 [2024-06-10 12:18:04.597273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.369 [2024-06-10 12:18:04.597286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.369 qpair failed and we were unable to recover it. 
00:29:15.369 [2024-06-10 12:18:04.597491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.369 [2024-06-10 12:18:04.597504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.369 qpair failed and we were unable to recover it. 00:29:15.369 [2024-06-10 12:18:04.597703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.370 [2024-06-10 12:18:04.597740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.370 qpair failed and we were unable to recover it. 00:29:15.370 [2024-06-10 12:18:04.597873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.370 [2024-06-10 12:18:04.597909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.370 qpair failed and we were unable to recover it. 00:29:15.370 [2024-06-10 12:18:04.598056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.370 [2024-06-10 12:18:04.598092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.370 qpair failed and we were unable to recover it. 00:29:15.370 [2024-06-10 12:18:04.598202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.370 [2024-06-10 12:18:04.598215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.370 qpair failed and we were unable to recover it. 
00:29:15.370 [2024-06-10 12:18:04.598383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.370 [2024-06-10 12:18:04.598396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.370 qpair failed and we were unable to recover it. 00:29:15.370 [2024-06-10 12:18:04.598541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.370 [2024-06-10 12:18:04.598555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.370 qpair failed and we were unable to recover it. 00:29:15.370 [2024-06-10 12:18:04.598699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.370 [2024-06-10 12:18:04.598713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.370 qpair failed and we were unable to recover it. 00:29:15.370 [2024-06-10 12:18:04.598884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.370 [2024-06-10 12:18:04.598897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.370 qpair failed and we were unable to recover it. 00:29:15.370 [2024-06-10 12:18:04.598986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.370 [2024-06-10 12:18:04.598999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.370 qpair failed and we were unable to recover it. 
00:29:15.370 [2024-06-10 12:18:04.599087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.370 [2024-06-10 12:18:04.599100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.370 qpair failed and we were unable to recover it. 00:29:15.370 [2024-06-10 12:18:04.599221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.370 [2024-06-10 12:18:04.599234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.370 qpair failed and we were unable to recover it. 00:29:15.370 [2024-06-10 12:18:04.599384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.370 [2024-06-10 12:18:04.599397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.370 qpair failed and we were unable to recover it. 00:29:15.370 [2024-06-10 12:18:04.599488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.370 [2024-06-10 12:18:04.599501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.370 qpair failed and we were unable to recover it. 00:29:15.370 [2024-06-10 12:18:04.599617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.370 [2024-06-10 12:18:04.599632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.370 qpair failed and we were unable to recover it. 
00:29:15.370 [2024-06-10 12:18:04.599811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.370 [2024-06-10 12:18:04.599824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.370 qpair failed and we were unable to recover it. 00:29:15.370 [2024-06-10 12:18:04.599988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.370 [2024-06-10 12:18:04.600001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.370 qpair failed and we were unable to recover it. 00:29:15.370 [2024-06-10 12:18:04.600101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.370 [2024-06-10 12:18:04.600113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.370 qpair failed and we were unable to recover it. 00:29:15.370 [2024-06-10 12:18:04.600208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.370 [2024-06-10 12:18:04.600220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.370 qpair failed and we were unable to recover it. 00:29:15.370 [2024-06-10 12:18:04.600293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.370 [2024-06-10 12:18:04.600305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.370 qpair failed and we were unable to recover it. 
00:29:15.370 [2024-06-10 12:18:04.600451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.370 [2024-06-10 12:18:04.600464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.370 qpair failed and we were unable to recover it. 00:29:15.370 [2024-06-10 12:18:04.600554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.370 [2024-06-10 12:18:04.600567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.370 qpair failed and we were unable to recover it. 00:29:15.370 [2024-06-10 12:18:04.600730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.370 [2024-06-10 12:18:04.600744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.370 qpair failed and we were unable to recover it. 00:29:15.370 [2024-06-10 12:18:04.600838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.370 [2024-06-10 12:18:04.600852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.370 qpair failed and we were unable to recover it. 00:29:15.370 [2024-06-10 12:18:04.601098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.370 [2024-06-10 12:18:04.601111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.370 qpair failed and we were unable to recover it. 
00:29:15.370 [2024-06-10 12:18:04.601325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.370 [2024-06-10 12:18:04.601338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.370 qpair failed and we were unable to recover it. 00:29:15.370 [2024-06-10 12:18:04.601449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.370 [2024-06-10 12:18:04.601462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.370 qpair failed and we were unable to recover it. 00:29:15.370 [2024-06-10 12:18:04.601572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.370 [2024-06-10 12:18:04.601585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.370 qpair failed and we were unable to recover it. 00:29:15.370 [2024-06-10 12:18:04.601691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.370 [2024-06-10 12:18:04.601705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.370 qpair failed and we were unable to recover it. 00:29:15.370 [2024-06-10 12:18:04.601779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.370 [2024-06-10 12:18:04.601791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.370 qpair failed and we were unable to recover it. 
00:29:15.370 [2024-06-10 12:18:04.602046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.370 [2024-06-10 12:18:04.602059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.370 qpair failed and we were unable to recover it. 00:29:15.370 [2024-06-10 12:18:04.602154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.370 [2024-06-10 12:18:04.602167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.370 qpair failed and we were unable to recover it. 00:29:15.370 [2024-06-10 12:18:04.602380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.370 [2024-06-10 12:18:04.602394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.370 qpair failed and we were unable to recover it. 00:29:15.370 [2024-06-10 12:18:04.602493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.370 [2024-06-10 12:18:04.602506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.370 qpair failed and we were unable to recover it. 00:29:15.370 [2024-06-10 12:18:04.602612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.370 [2024-06-10 12:18:04.602625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.370 qpair failed and we were unable to recover it. 
00:29:15.371 [2024-06-10 12:18:04.602768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.371 [2024-06-10 12:18:04.602781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.371 qpair failed and we were unable to recover it. 00:29:15.371 [2024-06-10 12:18:04.602945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.371 [2024-06-10 12:18:04.602958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.371 qpair failed and we were unable to recover it. 00:29:15.371 [2024-06-10 12:18:04.603133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.371 [2024-06-10 12:18:04.603146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.371 qpair failed and we were unable to recover it. 00:29:15.371 [2024-06-10 12:18:04.603238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.371 [2024-06-10 12:18:04.603249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.371 qpair failed and we were unable to recover it. 00:29:15.371 [2024-06-10 12:18:04.603329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.371 [2024-06-10 12:18:04.603341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.371 qpair failed and we were unable to recover it. 
00:29:15.371 [2024-06-10 12:18:04.603507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.371 [2024-06-10 12:18:04.603520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.371 qpair failed and we were unable to recover it. 00:29:15.371 [2024-06-10 12:18:04.603690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.371 [2024-06-10 12:18:04.603716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.371 qpair failed and we were unable to recover it. 00:29:15.371 [2024-06-10 12:18:04.603829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.371 [2024-06-10 12:18:04.603848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.371 qpair failed and we were unable to recover it. 00:29:15.371 [2024-06-10 12:18:04.603966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.371 [2024-06-10 12:18:04.603984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.371 qpair failed and we were unable to recover it. 00:29:15.371 [2024-06-10 12:18:04.604098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.371 [2024-06-10 12:18:04.604112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.371 qpair failed and we were unable to recover it. 
00:29:15.371 [2024-06-10 12:18:04.604255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.371 [2024-06-10 12:18:04.604268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.371 qpair failed and we were unable to recover it. 00:29:15.371 [2024-06-10 12:18:04.604415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.371 [2024-06-10 12:18:04.604429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.371 qpair failed and we were unable to recover it. 00:29:15.371 [2024-06-10 12:18:04.604528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.371 [2024-06-10 12:18:04.604541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.371 qpair failed and we were unable to recover it. 00:29:15.371 [2024-06-10 12:18:04.604633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.371 [2024-06-10 12:18:04.604645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.371 qpair failed and we were unable to recover it. 00:29:15.371 [2024-06-10 12:18:04.604731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.371 [2024-06-10 12:18:04.604743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.371 qpair failed and we were unable to recover it. 
00:29:15.371 [2024-06-10 12:18:04.604890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.371 [2024-06-10 12:18:04.604903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.371 qpair failed and we were unable to recover it. 00:29:15.371 [2024-06-10 12:18:04.605067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.371 [2024-06-10 12:18:04.605080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.371 qpair failed and we were unable to recover it. 00:29:15.371 [2024-06-10 12:18:04.605170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.371 [2024-06-10 12:18:04.605183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.371 qpair failed and we were unable to recover it. 00:29:15.371 [2024-06-10 12:18:04.605366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.371 [2024-06-10 12:18:04.605379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.371 qpair failed and we were unable to recover it. 00:29:15.371 [2024-06-10 12:18:04.605535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.371 [2024-06-10 12:18:04.605548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.371 qpair failed and we were unable to recover it. 
00:29:15.371 [2024-06-10 12:18:04.605709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.371 [2024-06-10 12:18:04.605722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.371 qpair failed and we were unable to recover it.
[... the three messages above repeat ~114 more times between 12:18:04.605803 and 12:18:04.621930 as the initiator retries connecting tqpair 0x7f5044000b90 to 10.0.0.2:4420; every attempt fails with errno = 111 (ECONNREFUSED) ...]
00:29:15.374 [2024-06-10 12:18:04.622010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.374 [2024-06-10 12:18:04.622023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.374 qpair failed and we were unable to recover it. 00:29:15.374 [2024-06-10 12:18:04.622215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.374 [2024-06-10 12:18:04.622228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.374 qpair failed and we were unable to recover it. 00:29:15.374 [2024-06-10 12:18:04.622323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.374 [2024-06-10 12:18:04.622336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.374 qpair failed and we were unable to recover it. 00:29:15.374 [2024-06-10 12:18:04.622427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.374 [2024-06-10 12:18:04.622440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.374 qpair failed and we were unable to recover it. 00:29:15.374 [2024-06-10 12:18:04.622527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.374 [2024-06-10 12:18:04.622540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.374 qpair failed and we were unable to recover it. 
00:29:15.374 [2024-06-10 12:18:04.622641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.374 [2024-06-10 12:18:04.622654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.374 qpair failed and we were unable to recover it. 00:29:15.374 [2024-06-10 12:18:04.622801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.374 [2024-06-10 12:18:04.622815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.374 qpair failed and we were unable to recover it. 00:29:15.374 [2024-06-10 12:18:04.622910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.374 [2024-06-10 12:18:04.622923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.374 qpair failed and we were unable to recover it. 00:29:15.374 [2024-06-10 12:18:04.623020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.374 [2024-06-10 12:18:04.623033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.374 qpair failed and we were unable to recover it. 00:29:15.374 [2024-06-10 12:18:04.623187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.374 [2024-06-10 12:18:04.623199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.374 qpair failed and we were unable to recover it. 
00:29:15.374 [2024-06-10 12:18:04.623301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.374 [2024-06-10 12:18:04.623314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.374 qpair failed and we were unable to recover it. 00:29:15.374 [2024-06-10 12:18:04.623403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.374 [2024-06-10 12:18:04.623416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.374 qpair failed and we were unable to recover it. 00:29:15.374 [2024-06-10 12:18:04.623500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.374 [2024-06-10 12:18:04.623514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.374 qpair failed and we were unable to recover it. 00:29:15.374 [2024-06-10 12:18:04.623596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.374 [2024-06-10 12:18:04.623609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.374 qpair failed and we were unable to recover it. 00:29:15.374 [2024-06-10 12:18:04.623782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.374 [2024-06-10 12:18:04.623795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.374 qpair failed and we were unable to recover it. 
00:29:15.374 [2024-06-10 12:18:04.623896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.374 [2024-06-10 12:18:04.623909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.374 qpair failed and we were unable to recover it. 00:29:15.374 [2024-06-10 12:18:04.624004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.374 [2024-06-10 12:18:04.624017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.374 qpair failed and we were unable to recover it. 00:29:15.374 [2024-06-10 12:18:04.624113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.374 [2024-06-10 12:18:04.624126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.374 qpair failed and we were unable to recover it. 00:29:15.374 [2024-06-10 12:18:04.624303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.374 [2024-06-10 12:18:04.624316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.374 qpair failed and we were unable to recover it. 00:29:15.375 [2024-06-10 12:18:04.624553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-06-10 12:18:04.624567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 
00:29:15.375 [2024-06-10 12:18:04.624645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-06-10 12:18:04.624657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-06-10 12:18:04.624737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-06-10 12:18:04.624749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-06-10 12:18:04.624978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-06-10 12:18:04.624991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-06-10 12:18:04.625074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-06-10 12:18:04.625086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-06-10 12:18:04.625302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-06-10 12:18:04.625315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 
00:29:15.375 [2024-06-10 12:18:04.625407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-06-10 12:18:04.625420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-06-10 12:18:04.625503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-06-10 12:18:04.625517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-06-10 12:18:04.625665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-06-10 12:18:04.625678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-06-10 12:18:04.625785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-06-10 12:18:04.625798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-06-10 12:18:04.625964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-06-10 12:18:04.625977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 
00:29:15.375 [2024-06-10 12:18:04.626060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-06-10 12:18:04.626073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-06-10 12:18:04.626224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-06-10 12:18:04.626240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-06-10 12:18:04.626324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-06-10 12:18:04.626337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-06-10 12:18:04.626430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-06-10 12:18:04.626443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-06-10 12:18:04.626555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-06-10 12:18:04.626568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 
00:29:15.375 [2024-06-10 12:18:04.626654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-06-10 12:18:04.626667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-06-10 12:18:04.626750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-06-10 12:18:04.626762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-06-10 12:18:04.626915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-06-10 12:18:04.626928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-06-10 12:18:04.627022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-06-10 12:18:04.627035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-06-10 12:18:04.627114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-06-10 12:18:04.627127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 
00:29:15.375 [2024-06-10 12:18:04.627289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-06-10 12:18:04.627302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-06-10 12:18:04.627384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-06-10 12:18:04.627396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-06-10 12:18:04.627499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-06-10 12:18:04.627513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-06-10 12:18:04.627664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-06-10 12:18:04.627677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-06-10 12:18:04.627755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-06-10 12:18:04.627767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 
00:29:15.375 [2024-06-10 12:18:04.627877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-06-10 12:18:04.627890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-06-10 12:18:04.628125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-06-10 12:18:04.628139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-06-10 12:18:04.628235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-06-10 12:18:04.628249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-06-10 12:18:04.628394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-06-10 12:18:04.628406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-06-10 12:18:04.628507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-06-10 12:18:04.628520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 
00:29:15.375 [2024-06-10 12:18:04.628742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-06-10 12:18:04.628756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.376 [2024-06-10 12:18:04.628973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.628987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-06-10 12:18:04.629158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.629171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-06-10 12:18:04.629360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.629373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-06-10 12:18:04.629472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.629491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 
00:29:15.376 [2024-06-10 12:18:04.629558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.629570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-06-10 12:18:04.629739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.629752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-06-10 12:18:04.629899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.629912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-06-10 12:18:04.629999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.630012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-06-10 12:18:04.630093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.630106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 
00:29:15.376 [2024-06-10 12:18:04.630267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.630280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-06-10 12:18:04.630371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.630384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-06-10 12:18:04.630465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.630485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-06-10 12:18:04.630642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.630656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-06-10 12:18:04.630767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.630780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 
00:29:15.376 [2024-06-10 12:18:04.630926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.630939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-06-10 12:18:04.631046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.631060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-06-10 12:18:04.631127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.631142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-06-10 12:18:04.631223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.631236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-06-10 12:18:04.631456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.631470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 
00:29:15.376 [2024-06-10 12:18:04.631653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.631667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-06-10 12:18:04.631751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.631766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-06-10 12:18:04.631950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.631964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-06-10 12:18:04.632042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.632054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-06-10 12:18:04.632224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.632237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 
00:29:15.376 [2024-06-10 12:18:04.632336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.632349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-06-10 12:18:04.632517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.632530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-06-10 12:18:04.632646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.632659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-06-10 12:18:04.632751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.632765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-06-10 12:18:04.632852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.632866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 
00:29:15.376 [2024-06-10 12:18:04.633025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.633038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-06-10 12:18:04.633187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.633200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-06-10 12:18:04.633415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.633428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-06-10 12:18:04.633578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.633592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-06-10 12:18:04.633745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.633758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 
00:29:15.376 [2024-06-10 12:18:04.633842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.633855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-06-10 12:18:04.633955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.633968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-06-10 12:18:04.634118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.634131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-06-10 12:18:04.634300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-06-10 12:18:04.634313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-06-10 12:18:04.634470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.634488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 
00:29:15.377 [2024-06-10 12:18:04.634570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.634583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 00:29:15.377 [2024-06-10 12:18:04.634728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.634741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 00:29:15.377 [2024-06-10 12:18:04.634842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.634854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 00:29:15.377 [2024-06-10 12:18:04.634934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.634946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 00:29:15.377 [2024-06-10 12:18:04.635093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.635106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 
00:29:15.377 [2024-06-10 12:18:04.635196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.635209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 00:29:15.377 [2024-06-10 12:18:04.635291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.635304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 00:29:15.377 [2024-06-10 12:18:04.635399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.635412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 00:29:15.377 [2024-06-10 12:18:04.635618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.635640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 00:29:15.377 [2024-06-10 12:18:04.635747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.635765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 
00:29:15.377 [2024-06-10 12:18:04.635858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.635875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 00:29:15.377 [2024-06-10 12:18:04.635956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.635973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 00:29:15.377 [2024-06-10 12:18:04.636077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.636094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 00:29:15.377 [2024-06-10 12:18:04.636251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.636268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 00:29:15.377 [2024-06-10 12:18:04.636375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.636389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 
00:29:15.377 [2024-06-10 12:18:04.636540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.636553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 00:29:15.377 [2024-06-10 12:18:04.636637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.636650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 00:29:15.377 [2024-06-10 12:18:04.636761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.636774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 00:29:15.377 [2024-06-10 12:18:04.636869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.636882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 00:29:15.377 [2024-06-10 12:18:04.636981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.636994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 
00:29:15.377 [2024-06-10 12:18:04.637066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.637079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 00:29:15.377 [2024-06-10 12:18:04.637159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.637173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 00:29:15.377 [2024-06-10 12:18:04.637284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.637297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 00:29:15.377 [2024-06-10 12:18:04.637514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.637527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 00:29:15.377 [2024-06-10 12:18:04.637615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.637629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 
00:29:15.377 [2024-06-10 12:18:04.637715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.637728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 00:29:15.377 [2024-06-10 12:18:04.637813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.637826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 00:29:15.377 [2024-06-10 12:18:04.637922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.637935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 00:29:15.377 [2024-06-10 12:18:04.638015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.638028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 00:29:15.377 [2024-06-10 12:18:04.638173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.638186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 
00:29:15.377 [2024-06-10 12:18:04.638380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.638393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 00:29:15.377 [2024-06-10 12:18:04.638485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.638499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 00:29:15.377 [2024-06-10 12:18:04.638595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.638608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 00:29:15.377 [2024-06-10 12:18:04.638717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.638730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 00:29:15.377 [2024-06-10 12:18:04.638817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.638830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 
00:29:15.377 [2024-06-10 12:18:04.639002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.639015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 00:29:15.377 [2024-06-10 12:18:04.639105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.639118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 00:29:15.377 [2024-06-10 12:18:04.639198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.377 [2024-06-10 12:18:04.639212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.377 qpair failed and we were unable to recover it. 00:29:15.378 [2024-06-10 12:18:04.639282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.639294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 00:29:15.378 [2024-06-10 12:18:04.639441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.639454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 
00:29:15.378 [2024-06-10 12:18:04.639585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.639599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 00:29:15.378 [2024-06-10 12:18:04.639762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.639775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 00:29:15.378 [2024-06-10 12:18:04.639884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.639897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 00:29:15.378 [2024-06-10 12:18:04.640047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.640061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 00:29:15.378 [2024-06-10 12:18:04.640142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.640155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 
00:29:15.378 [2024-06-10 12:18:04.640303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.640316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 00:29:15.378 [2024-06-10 12:18:04.640490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.640505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 00:29:15.378 [2024-06-10 12:18:04.640597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.640610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 00:29:15.378 [2024-06-10 12:18:04.640777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.640790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 00:29:15.378 [2024-06-10 12:18:04.640904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.640917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 
00:29:15.378 [2024-06-10 12:18:04.641073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.641087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 00:29:15.378 [2024-06-10 12:18:04.641236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.641249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 00:29:15.378 [2024-06-10 12:18:04.641448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.641461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 00:29:15.378 [2024-06-10 12:18:04.641560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.641574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 00:29:15.378 [2024-06-10 12:18:04.641734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.641747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 
00:29:15.378 [2024-06-10 12:18:04.641858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.641871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 00:29:15.378 [2024-06-10 12:18:04.641966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.641979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 00:29:15.378 [2024-06-10 12:18:04.642126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.642139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 00:29:15.378 [2024-06-10 12:18:04.642206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.642218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 00:29:15.378 [2024-06-10 12:18:04.642297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.642310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 
00:29:15.378 [2024-06-10 12:18:04.642494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.642507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 00:29:15.378 [2024-06-10 12:18:04.642611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.642626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 00:29:15.378 [2024-06-10 12:18:04.642790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.642803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 00:29:15.378 [2024-06-10 12:18:04.642970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.642983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 00:29:15.378 [2024-06-10 12:18:04.643063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.643076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 
00:29:15.378 [2024-06-10 12:18:04.643159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.643172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 00:29:15.378 [2024-06-10 12:18:04.643253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.643266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 00:29:15.378 [2024-06-10 12:18:04.643346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.643359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 00:29:15.378 [2024-06-10 12:18:04.643442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.643456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 00:29:15.378 [2024-06-10 12:18:04.643616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.643630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 
00:29:15.378 [2024-06-10 12:18:04.643780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.643793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 00:29:15.378 [2024-06-10 12:18:04.643955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.643968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 00:29:15.378 [2024-06-10 12:18:04.644073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.644086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 00:29:15.378 [2024-06-10 12:18:04.644239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.644253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 00:29:15.378 [2024-06-10 12:18:04.644338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.644351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 
00:29:15.378 [2024-06-10 12:18:04.644596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.644610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.378 qpair failed and we were unable to recover it. 00:29:15.378 [2024-06-10 12:18:04.644769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.378 [2024-06-10 12:18:04.644782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.379 qpair failed and we were unable to recover it. 00:29:15.379 [2024-06-10 12:18:04.645006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.379 [2024-06-10 12:18:04.645020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.379 qpair failed and we were unable to recover it. 00:29:15.379 [2024-06-10 12:18:04.645183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.379 [2024-06-10 12:18:04.645196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.379 qpair failed and we were unable to recover it. 00:29:15.379 [2024-06-10 12:18:04.645360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.379 [2024-06-10 12:18:04.645374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.379 qpair failed and we were unable to recover it. 
00:29:15.379 [2024-06-10 12:18:04.645529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.379 [2024-06-10 12:18:04.645543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.379 qpair failed and we were unable to recover it. 00:29:15.379 [2024-06-10 12:18:04.645703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.379 [2024-06-10 12:18:04.645716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.379 qpair failed and we were unable to recover it. 00:29:15.379 [2024-06-10 12:18:04.645806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.379 [2024-06-10 12:18:04.645819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.379 qpair failed and we were unable to recover it. 00:29:15.379 [2024-06-10 12:18:04.645929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.379 [2024-06-10 12:18:04.645942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.379 qpair failed and we were unable to recover it. 00:29:15.379 [2024-06-10 12:18:04.646088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.379 [2024-06-10 12:18:04.646101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.379 qpair failed and we were unable to recover it. 
00:29:15.379 [2024-06-10 12:18:04.646183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.379 [2024-06-10 12:18:04.646196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.379 qpair failed and we were unable to recover it. 00:29:15.379 [2024-06-10 12:18:04.646288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.379 [2024-06-10 12:18:04.646301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.379 qpair failed and we were unable to recover it. 00:29:15.379 [2024-06-10 12:18:04.646450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.379 [2024-06-10 12:18:04.646463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.379 qpair failed and we were unable to recover it. 00:29:15.379 [2024-06-10 12:18:04.646572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.379 [2024-06-10 12:18:04.646594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.379 qpair failed and we were unable to recover it. 00:29:15.379 [2024-06-10 12:18:04.646778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.379 [2024-06-10 12:18:04.646796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.379 qpair failed and we were unable to recover it. 
00:29:15.379 [2024-06-10 12:18:04.646887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.379 [2024-06-10 12:18:04.646905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:15.379 qpair failed and we were unable to recover it.
00:29:15.379 [2024-06-10 12:18:04.647076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.379 [2024-06-10 12:18:04.647094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:15.379 qpair failed and we were unable to recover it.
00:29:15.379 [2024-06-10 12:18:04.647278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.379 [2024-06-10 12:18:04.647296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:15.379 qpair failed and we were unable to recover it.
00:29:15.379 [2024-06-10 12:18:04.647377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.379 [2024-06-10 12:18:04.647395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:15.379 qpair failed and we were unable to recover it.
00:29:15.379 [2024-06-10 12:18:04.647513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.379 [2024-06-10 12:18:04.647528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.379 qpair failed and we were unable to recover it.
00:29:15.379 [2024-06-10 12:18:04.647717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.379 [2024-06-10 12:18:04.647730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.379 qpair failed and we were unable to recover it.
00:29:15.379 [2024-06-10 12:18:04.647867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.379 [2024-06-10 12:18:04.647880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.379 qpair failed and we were unable to recover it.
00:29:15.379 [2024-06-10 12:18:04.647977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.379 [2024-06-10 12:18:04.647990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.379 qpair failed and we were unable to recover it.
00:29:15.379 [2024-06-10 12:18:04.648150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.379 [2024-06-10 12:18:04.648163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.379 qpair failed and we were unable to recover it.
00:29:15.379 [2024-06-10 12:18:04.648248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.379 [2024-06-10 12:18:04.648261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.379 qpair failed and we were unable to recover it.
00:29:15.379 [2024-06-10 12:18:04.648372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.379 [2024-06-10 12:18:04.648385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.379 qpair failed and we were unable to recover it.
00:29:15.379 [2024-06-10 12:18:04.648475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.379 [2024-06-10 12:18:04.648496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.379 qpair failed and we were unable to recover it.
00:29:15.379 [2024-06-10 12:18:04.648579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.379 [2024-06-10 12:18:04.648592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.379 qpair failed and we were unable to recover it.
00:29:15.379 [2024-06-10 12:18:04.648750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.379 [2024-06-10 12:18:04.648763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.379 qpair failed and we were unable to recover it.
00:29:15.379 [2024-06-10 12:18:04.648843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.379 [2024-06-10 12:18:04.648856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.379 qpair failed and we were unable to recover it.
00:29:15.379 [2024-06-10 12:18:04.649022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.379 [2024-06-10 12:18:04.649035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.379 qpair failed and we were unable to recover it.
00:29:15.379 [2024-06-10 12:18:04.649131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.379 [2024-06-10 12:18:04.649143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.379 qpair failed and we were unable to recover it.
00:29:15.379 [2024-06-10 12:18:04.649239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.379 [2024-06-10 12:18:04.649253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.379 qpair failed and we were unable to recover it.
00:29:15.379 [2024-06-10 12:18:04.649357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.379 [2024-06-10 12:18:04.649370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.379 qpair failed and we were unable to recover it.
00:29:15.379 [2024-06-10 12:18:04.649517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.379 [2024-06-10 12:18:04.649531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.379 qpair failed and we were unable to recover it.
00:29:15.379 [2024-06-10 12:18:04.649629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.649642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.649733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.649746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.649912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.649925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.650072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.650086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.650173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.650186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.650335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.650348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.650411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.650423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.650584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.650597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.650708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.650721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.650876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.650890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.650985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.650997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.651144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.651156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.651226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.651238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.651322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.651335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.651422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.651436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.651521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.651534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.651636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.651649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.651729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.651742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.651857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.651875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.651980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.651997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.652156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.652173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.652268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.652286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.652451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.652469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.652582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.652600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.652761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.652775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.652960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.652973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.653121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.653134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.653235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.653248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.653338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.653352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.653447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.653460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.653548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.653561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.653712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.653728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.653827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.653839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.653922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.653935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.654015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.654028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.654176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.654189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.654277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.654290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.654371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.654383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.380 qpair failed and we were unable to recover it.
00:29:15.380 [2024-06-10 12:18:04.654498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.380 [2024-06-10 12:18:04.654511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.654594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.654607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.654760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.654773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.654861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.654874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.654958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.654971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.655118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.655131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.655244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.655257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.655338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.655351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.655443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.655456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.655611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.655624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.655787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.655800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.655884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.655897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.655978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.655991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.656149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.656162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.656313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.656326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.656490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.656503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.656600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.656613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.656710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.656722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.656805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.656818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.656967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.656980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.657132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.657145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.657222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.657235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.657316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.657329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.657408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.657421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.657510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.657523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.657708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.657721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.657802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.657815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.657902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.657915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.658097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.658110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.658225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.658238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.658390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.658402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.658486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.658500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.658604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.658617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.658703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.658718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.658818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.658831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.658933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.658946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.659093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.381 [2024-06-10 12:18:04.659106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.381 qpair failed and we were unable to recover it.
00:29:15.381 [2024-06-10 12:18:04.659197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.381 [2024-06-10 12:18:04.659210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.381 qpair failed and we were unable to recover it. 00:29:15.381 [2024-06-10 12:18:04.659292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.381 [2024-06-10 12:18:04.659306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.381 qpair failed and we were unable to recover it. 00:29:15.381 [2024-06-10 12:18:04.659454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.659467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 00:29:15.382 [2024-06-10 12:18:04.659620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.659634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 00:29:15.382 [2024-06-10 12:18:04.659730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.659743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 
00:29:15.382 [2024-06-10 12:18:04.659841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.659854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 00:29:15.382 [2024-06-10 12:18:04.659925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.659937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 00:29:15.382 [2024-06-10 12:18:04.660086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.660099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 00:29:15.382 [2024-06-10 12:18:04.660223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.660236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 00:29:15.382 [2024-06-10 12:18:04.660319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.660332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 
00:29:15.382 [2024-06-10 12:18:04.660518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.660531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 00:29:15.382 [2024-06-10 12:18:04.660630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.660643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 00:29:15.382 [2024-06-10 12:18:04.660734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.660747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 00:29:15.382 [2024-06-10 12:18:04.660965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.660979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 00:29:15.382 [2024-06-10 12:18:04.661060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.661073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 
00:29:15.382 [2024-06-10 12:18:04.661244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.661257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 00:29:15.382 [2024-06-10 12:18:04.661405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.661418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 00:29:15.382 [2024-06-10 12:18:04.661501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.661515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 00:29:15.382 [2024-06-10 12:18:04.661755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.661768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 00:29:15.382 [2024-06-10 12:18:04.661921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.661933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 
00:29:15.382 [2024-06-10 12:18:04.662016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.662029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 00:29:15.382 [2024-06-10 12:18:04.662257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.662270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 00:29:15.382 [2024-06-10 12:18:04.662358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.662370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 00:29:15.382 [2024-06-10 12:18:04.662473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.662490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 00:29:15.382 [2024-06-10 12:18:04.662643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.662656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 
00:29:15.382 [2024-06-10 12:18:04.662744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.662756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 00:29:15.382 [2024-06-10 12:18:04.662837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.662850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 00:29:15.382 [2024-06-10 12:18:04.662941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.662954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 00:29:15.382 [2024-06-10 12:18:04.663052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.663065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 00:29:15.382 [2024-06-10 12:18:04.663214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.663227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 
00:29:15.382 [2024-06-10 12:18:04.663308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.663321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 00:29:15.382 [2024-06-10 12:18:04.663424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.663437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 00:29:15.382 [2024-06-10 12:18:04.663523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.663536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 00:29:15.382 [2024-06-10 12:18:04.663688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.663701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 00:29:15.382 [2024-06-10 12:18:04.663782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.663795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 
00:29:15.382 [2024-06-10 12:18:04.663946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.663959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 00:29:15.382 [2024-06-10 12:18:04.664060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.664073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 00:29:15.382 [2024-06-10 12:18:04.664221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.664235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 00:29:15.382 [2024-06-10 12:18:04.664384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.664397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 00:29:15.382 [2024-06-10 12:18:04.664549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.664563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 
00:29:15.382 [2024-06-10 12:18:04.664658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-06-10 12:18:04.664671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.382 qpair failed and we were unable to recover it. 00:29:15.383 [2024-06-10 12:18:04.664771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.664784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 00:29:15.383 [2024-06-10 12:18:04.664850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.664862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 00:29:15.383 [2024-06-10 12:18:04.664943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.664956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 00:29:15.383 [2024-06-10 12:18:04.665120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.665133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 
00:29:15.383 [2024-06-10 12:18:04.665218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.665231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 00:29:15.383 [2024-06-10 12:18:04.665296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.665308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 00:29:15.383 [2024-06-10 12:18:04.665468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.665487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 00:29:15.383 [2024-06-10 12:18:04.665633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.665647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 00:29:15.383 [2024-06-10 12:18:04.665729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.665741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 
00:29:15.383 [2024-06-10 12:18:04.665841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.665854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 00:29:15.383 [2024-06-10 12:18:04.666051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.666064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 00:29:15.383 [2024-06-10 12:18:04.666157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.666170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 00:29:15.383 [2024-06-10 12:18:04.666281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.666294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 00:29:15.383 [2024-06-10 12:18:04.666516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.666529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 
00:29:15.383 [2024-06-10 12:18:04.666696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.666708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 00:29:15.383 [2024-06-10 12:18:04.666871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.666884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 00:29:15.383 [2024-06-10 12:18:04.666987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.667000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 00:29:15.383 [2024-06-10 12:18:04.667147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.667161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 00:29:15.383 [2024-06-10 12:18:04.667258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.667271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 
00:29:15.383 [2024-06-10 12:18:04.667358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.667370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 00:29:15.383 [2024-06-10 12:18:04.667519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.667532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 00:29:15.383 [2024-06-10 12:18:04.667764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.667777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 00:29:15.383 [2024-06-10 12:18:04.667875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.667890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 00:29:15.383 [2024-06-10 12:18:04.667975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.667988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 
00:29:15.383 [2024-06-10 12:18:04.668112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.668125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 00:29:15.383 [2024-06-10 12:18:04.668208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.668221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 00:29:15.383 [2024-06-10 12:18:04.668336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.668349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 00:29:15.383 [2024-06-10 12:18:04.668433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.668446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 00:29:15.383 [2024-06-10 12:18:04.668540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.668553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 
00:29:15.383 [2024-06-10 12:18:04.668633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.668646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 00:29:15.383 [2024-06-10 12:18:04.668726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.668739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 00:29:15.383 [2024-06-10 12:18:04.668831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.668844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 00:29:15.383 [2024-06-10 12:18:04.668925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.668938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 00:29:15.383 [2024-06-10 12:18:04.669090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.669103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 
00:29:15.383 [2024-06-10 12:18:04.669267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.669279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 00:29:15.383 [2024-06-10 12:18:04.669362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.669375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 00:29:15.383 [2024-06-10 12:18:04.669483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.669497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 00:29:15.383 [2024-06-10 12:18:04.669588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.383 [2024-06-10 12:18:04.669600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.383 qpair failed and we were unable to recover it. 00:29:15.383 [2024-06-10 12:18:04.669681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.384 [2024-06-10 12:18:04.669694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.384 qpair failed and we were unable to recover it. 
00:29:15.384 [2024-06-10 12:18:04.669782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.384 [2024-06-10 12:18:04.669795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.384 qpair failed and we were unable to recover it. 00:29:15.384 [2024-06-10 12:18:04.669945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.384 [2024-06-10 12:18:04.669958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.384 qpair failed and we were unable to recover it. 00:29:15.384 [2024-06-10 12:18:04.670106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.384 [2024-06-10 12:18:04.670119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.384 qpair failed and we were unable to recover it. 00:29:15.384 [2024-06-10 12:18:04.670289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.384 [2024-06-10 12:18:04.670303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.384 qpair failed and we were unable to recover it. 00:29:15.384 [2024-06-10 12:18:04.670398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.384 [2024-06-10 12:18:04.670411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.384 qpair failed and we were unable to recover it. 
00:29:15.384 [2024-06-10 12:18:04.670630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.384 [2024-06-10 12:18:04.670643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.384 qpair failed and we were unable to recover it.
00:29:15.387 [2024-06-10 12:18:04.687946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.387 [2024-06-10 12:18:04.687959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.387 qpair failed and we were unable to recover it. 00:29:15.387 [2024-06-10 12:18:04.688058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.387 [2024-06-10 12:18:04.688071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.387 qpair failed and we were unable to recover it. 00:29:15.387 [2024-06-10 12:18:04.688298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.387 [2024-06-10 12:18:04.688311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.387 qpair failed and we were unable to recover it. 00:29:15.387 [2024-06-10 12:18:04.688394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.387 [2024-06-10 12:18:04.688408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.387 qpair failed and we were unable to recover it. 00:29:15.387 [2024-06-10 12:18:04.688556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.387 [2024-06-10 12:18:04.688569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.387 qpair failed and we were unable to recover it. 
00:29:15.387 [2024-06-10 12:18:04.688756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.387 [2024-06-10 12:18:04.688769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.387 qpair failed and we were unable to recover it. 00:29:15.387 [2024-06-10 12:18:04.689005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.387 [2024-06-10 12:18:04.689018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.387 qpair failed and we were unable to recover it. 00:29:15.387 [2024-06-10 12:18:04.689122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.387 [2024-06-10 12:18:04.689135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.387 qpair failed and we were unable to recover it. 00:29:15.387 [2024-06-10 12:18:04.689227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.387 [2024-06-10 12:18:04.689240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.387 qpair failed and we were unable to recover it. 00:29:15.387 [2024-06-10 12:18:04.689406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.387 [2024-06-10 12:18:04.689419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.387 qpair failed and we were unable to recover it. 
00:29:15.387 [2024-06-10 12:18:04.689562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.387 [2024-06-10 12:18:04.689576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.387 qpair failed and we were unable to recover it. 00:29:15.387 [2024-06-10 12:18:04.689662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.387 [2024-06-10 12:18:04.689675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.387 qpair failed and we were unable to recover it. 00:29:15.387 [2024-06-10 12:18:04.689823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.387 [2024-06-10 12:18:04.689836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.387 qpair failed and we were unable to recover it. 00:29:15.387 [2024-06-10 12:18:04.690073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.387 [2024-06-10 12:18:04.690085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.387 qpair failed and we were unable to recover it. 00:29:15.387 [2024-06-10 12:18:04.690220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.387 [2024-06-10 12:18:04.690233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.387 qpair failed and we were unable to recover it. 
00:29:15.387 [2024-06-10 12:18:04.690367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.387 [2024-06-10 12:18:04.690380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.387 qpair failed and we were unable to recover it. 00:29:15.387 [2024-06-10 12:18:04.690460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.387 [2024-06-10 12:18:04.690473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.387 qpair failed and we were unable to recover it. 00:29:15.387 [2024-06-10 12:18:04.690576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.387 [2024-06-10 12:18:04.690590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.387 qpair failed and we were unable to recover it. 00:29:15.387 [2024-06-10 12:18:04.690805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.387 [2024-06-10 12:18:04.690818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.387 qpair failed and we were unable to recover it. 00:29:15.387 [2024-06-10 12:18:04.690933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.387 [2024-06-10 12:18:04.690946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.387 qpair failed and we were unable to recover it. 
00:29:15.387 [2024-06-10 12:18:04.691035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.387 [2024-06-10 12:18:04.691047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.387 qpair failed and we were unable to recover it. 00:29:15.387 [2024-06-10 12:18:04.691144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.387 [2024-06-10 12:18:04.691157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.387 qpair failed and we were unable to recover it. 00:29:15.387 [2024-06-10 12:18:04.691261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.387 [2024-06-10 12:18:04.691275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.387 qpair failed and we were unable to recover it. 00:29:15.387 [2024-06-10 12:18:04.691389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.387 [2024-06-10 12:18:04.691404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.387 qpair failed and we were unable to recover it. 00:29:15.387 [2024-06-10 12:18:04.691580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.387 [2024-06-10 12:18:04.691594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.387 qpair failed and we were unable to recover it. 
00:29:15.387 [2024-06-10 12:18:04.691744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.387 [2024-06-10 12:18:04.691757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.387 qpair failed and we were unable to recover it. 00:29:15.387 [2024-06-10 12:18:04.691921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.387 [2024-06-10 12:18:04.691934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.387 qpair failed and we were unable to recover it. 00:29:15.387 [2024-06-10 12:18:04.692098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.387 [2024-06-10 12:18:04.692111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.387 qpair failed and we were unable to recover it. 00:29:15.387 [2024-06-10 12:18:04.692290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.387 [2024-06-10 12:18:04.692303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.387 qpair failed and we were unable to recover it. 00:29:15.387 [2024-06-10 12:18:04.692449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.387 [2024-06-10 12:18:04.692463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.387 qpair failed and we were unable to recover it. 
00:29:15.387 [2024-06-10 12:18:04.692716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.387 [2024-06-10 12:18:04.692736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.387 qpair failed and we were unable to recover it. 00:29:15.387 [2024-06-10 12:18:04.692894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.387 [2024-06-10 12:18:04.692912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.387 qpair failed and we were unable to recover it. 00:29:15.387 [2024-06-10 12:18:04.693092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.387 [2024-06-10 12:18:04.693109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.387 qpair failed and we were unable to recover it. 00:29:15.388 [2024-06-10 12:18:04.693267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.693281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 00:29:15.388 [2024-06-10 12:18:04.693428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.693441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 
00:29:15.388 [2024-06-10 12:18:04.693607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.693621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 00:29:15.388 [2024-06-10 12:18:04.693785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.693799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 00:29:15.388 [2024-06-10 12:18:04.693886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.693899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 00:29:15.388 [2024-06-10 12:18:04.694012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.694025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 00:29:15.388 [2024-06-10 12:18:04.694130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.694143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 
00:29:15.388 [2024-06-10 12:18:04.694239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.694252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 00:29:15.388 [2024-06-10 12:18:04.694419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.694432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 00:29:15.388 [2024-06-10 12:18:04.694613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.694626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 00:29:15.388 [2024-06-10 12:18:04.694814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.694827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 00:29:15.388 [2024-06-10 12:18:04.694923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.694936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 
00:29:15.388 [2024-06-10 12:18:04.695027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.695040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 00:29:15.388 [2024-06-10 12:18:04.695122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.695134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 00:29:15.388 [2024-06-10 12:18:04.695347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.695361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 00:29:15.388 [2024-06-10 12:18:04.695578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.695591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 00:29:15.388 [2024-06-10 12:18:04.695806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.695819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 
00:29:15.388 [2024-06-10 12:18:04.695905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.695917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 00:29:15.388 [2024-06-10 12:18:04.696101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.696114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 00:29:15.388 [2024-06-10 12:18:04.696262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.696275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 00:29:15.388 [2024-06-10 12:18:04.696422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.696436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 00:29:15.388 [2024-06-10 12:18:04.696597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.696610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 
00:29:15.388 [2024-06-10 12:18:04.696777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.696790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 00:29:15.388 [2024-06-10 12:18:04.696960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.696973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 00:29:15.388 [2024-06-10 12:18:04.697066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.697079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 00:29:15.388 [2024-06-10 12:18:04.697347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.697361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 00:29:15.388 [2024-06-10 12:18:04.697506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.697519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 
00:29:15.388 [2024-06-10 12:18:04.697684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.697698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 00:29:15.388 [2024-06-10 12:18:04.697843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.697856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 00:29:15.388 [2024-06-10 12:18:04.697953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.697966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 00:29:15.388 [2024-06-10 12:18:04.698113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.698128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 00:29:15.388 [2024-06-10 12:18:04.698267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.698280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 
00:29:15.388 [2024-06-10 12:18:04.698449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.698462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 00:29:15.388 [2024-06-10 12:18:04.698580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.698599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 00:29:15.388 [2024-06-10 12:18:04.698701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.698721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 00:29:15.388 [2024-06-10 12:18:04.698882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.388 [2024-06-10 12:18:04.698900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.388 qpair failed and we were unable to recover it. 00:29:15.388 [2024-06-10 12:18:04.699075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.389 [2024-06-10 12:18:04.699089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.389 qpair failed and we were unable to recover it. 
00:29:15.389 [2024-06-10 12:18:04.699196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.389 [2024-06-10 12:18:04.699209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.389 qpair failed and we were unable to recover it. 00:29:15.389 [2024-06-10 12:18:04.699422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.389 [2024-06-10 12:18:04.699435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.389 qpair failed and we were unable to recover it. 00:29:15.389 [2024-06-10 12:18:04.699602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.389 [2024-06-10 12:18:04.699615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.389 qpair failed and we were unable to recover it. 00:29:15.389 [2024-06-10 12:18:04.699796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.389 [2024-06-10 12:18:04.699809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.389 qpair failed and we were unable to recover it. 00:29:15.389 [2024-06-10 12:18:04.699902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.389 [2024-06-10 12:18:04.699916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.389 qpair failed and we were unable to recover it. 
00:29:15.389 [2024-06-10 12:18:04.700002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.389 [2024-06-10 12:18:04.700015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.389 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1037 connect() failed, errno = 111; nvme_tcp.c:2374 sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for roughly 110 further attempts between 12:18:04.700 and 12:18:04.727 ...]
00:29:15.391 [2024-06-10 12:18:04.727284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.391 [2024-06-10 12:18:04.727324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.391 qpair failed and we were unable to recover it.
00:29:15.391 [2024-06-10 12:18:04.727521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.391 [2024-06-10 12:18:04.727563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.391 qpair failed and we were unable to recover it. 00:29:15.391 [2024-06-10 12:18:04.727806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.391 [2024-06-10 12:18:04.727846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.391 qpair failed and we were unable to recover it. 00:29:15.391 [2024-06-10 12:18:04.728138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.391 [2024-06-10 12:18:04.728151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.391 qpair failed and we were unable to recover it. 00:29:15.391 [2024-06-10 12:18:04.728320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.391 [2024-06-10 12:18:04.728333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.391 qpair failed and we were unable to recover it. 00:29:15.391 [2024-06-10 12:18:04.728522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.391 [2024-06-10 12:18:04.728564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.391 qpair failed and we were unable to recover it. 
00:29:15.391 [2024-06-10 12:18:04.728785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.391 [2024-06-10 12:18:04.728825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.391 qpair failed and we were unable to recover it. 00:29:15.391 [2024-06-10 12:18:04.729127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.391 [2024-06-10 12:18:04.729167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.391 qpair failed and we were unable to recover it. 00:29:15.391 [2024-06-10 12:18:04.729424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.391 [2024-06-10 12:18:04.729466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.391 qpair failed and we were unable to recover it. 00:29:15.391 [2024-06-10 12:18:04.729699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.391 [2024-06-10 12:18:04.729739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.391 qpair failed and we were unable to recover it. 00:29:15.391 [2024-06-10 12:18:04.729974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.391 [2024-06-10 12:18:04.730015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.391 qpair failed and we were unable to recover it. 
00:29:15.391 [2024-06-10 12:18:04.730186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.391 [2024-06-10 12:18:04.730227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.391 qpair failed and we were unable to recover it. 00:29:15.391 [2024-06-10 12:18:04.730542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.391 [2024-06-10 12:18:04.730584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.391 qpair failed and we were unable to recover it. 00:29:15.391 [2024-06-10 12:18:04.730768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.391 [2024-06-10 12:18:04.730808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.391 qpair failed and we were unable to recover it. 00:29:15.391 [2024-06-10 12:18:04.731030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.391 [2024-06-10 12:18:04.731071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 00:29:15.392 [2024-06-10 12:18:04.731381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.731421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 
00:29:15.392 [2024-06-10 12:18:04.731656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.731698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 00:29:15.392 [2024-06-10 12:18:04.731847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.731888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 00:29:15.392 [2024-06-10 12:18:04.732029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.732070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 00:29:15.392 [2024-06-10 12:18:04.732295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.732335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 00:29:15.392 [2024-06-10 12:18:04.732567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.732608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 
00:29:15.392 [2024-06-10 12:18:04.732934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.732971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 00:29:15.392 [2024-06-10 12:18:04.733271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.733308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 00:29:15.392 [2024-06-10 12:18:04.733434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.733453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 00:29:15.392 [2024-06-10 12:18:04.733633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.733677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 00:29:15.392 [2024-06-10 12:18:04.733818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.733857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 
00:29:15.392 [2024-06-10 12:18:04.734160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.734200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 00:29:15.392 [2024-06-10 12:18:04.734399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.734440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 00:29:15.392 [2024-06-10 12:18:04.734776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.734817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 00:29:15.392 [2024-06-10 12:18:04.735117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.735157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 00:29:15.392 [2024-06-10 12:18:04.735372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.735387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 
00:29:15.392 [2024-06-10 12:18:04.735505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.735547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 00:29:15.392 [2024-06-10 12:18:04.735720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.735761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 00:29:15.392 [2024-06-10 12:18:04.735982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.736023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 00:29:15.392 [2024-06-10 12:18:04.736150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.736165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 00:29:15.392 [2024-06-10 12:18:04.736338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.736378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 
00:29:15.392 [2024-06-10 12:18:04.736598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.736639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 00:29:15.392 [2024-06-10 12:18:04.736850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.736890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 00:29:15.392 [2024-06-10 12:18:04.737037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.737050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 00:29:15.392 [2024-06-10 12:18:04.737228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.737242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 00:29:15.392 [2024-06-10 12:18:04.737457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.737469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 
00:29:15.392 [2024-06-10 12:18:04.738187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.738209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 00:29:15.392 [2024-06-10 12:18:04.738368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.738382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 00:29:15.392 [2024-06-10 12:18:04.738650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.738664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 00:29:15.392 [2024-06-10 12:18:04.738876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.738890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 00:29:15.392 [2024-06-10 12:18:04.739105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.739118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 
00:29:15.392 [2024-06-10 12:18:04.739231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.739243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 00:29:15.392 [2024-06-10 12:18:04.739335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.739348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 00:29:15.392 [2024-06-10 12:18:04.739459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.739472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 00:29:15.392 [2024-06-10 12:18:04.739634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.739647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 00:29:15.392 [2024-06-10 12:18:04.739806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.739819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 
00:29:15.392 [2024-06-10 12:18:04.740014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.740027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 00:29:15.392 [2024-06-10 12:18:04.740243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.740256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 00:29:15.392 [2024-06-10 12:18:04.740409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.392 [2024-06-10 12:18:04.740423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.392 qpair failed and we were unable to recover it. 00:29:15.392 [2024-06-10 12:18:04.740502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.740514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.740597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.740610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 
00:29:15.393 [2024-06-10 12:18:04.740826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.740840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.740984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.740997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.741169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.741182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.741291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.741305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.741523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.741538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 
00:29:15.393 [2024-06-10 12:18:04.741712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.741725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.741814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.741826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.741971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.741984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.742249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.742262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.742438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.742451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 
00:29:15.393 [2024-06-10 12:18:04.742610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.742624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.742730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.742743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.742893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.742907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.743080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.743093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.743185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.743197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 
00:29:15.393 [2024-06-10 12:18:04.743343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.743357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.743530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.743544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.743628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.743640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.743884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.743899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.743999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.744011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 
00:29:15.393 [2024-06-10 12:18:04.744106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.744118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.744225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.744237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.744450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.744463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.744644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.744658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.744735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.744748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 
00:29:15.393 [2024-06-10 12:18:04.744894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.744908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.745143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.745156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.745330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.745343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.745506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.745519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.745691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.745705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 
00:29:15.393 [2024-06-10 12:18:04.745926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.745940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.746101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.746113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.746282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.746296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.746447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.746460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.746577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.746589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 
00:29:15.393 [2024-06-10 12:18:04.746676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.746689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.746863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.746876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.746954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.746966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.747085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.747097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.747246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.747259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 
00:29:15.393 [2024-06-10 12:18:04.747406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.747419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.747677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.747691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.747904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.747917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.748020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.748032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.748211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.748224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 
00:29:15.393 [2024-06-10 12:18:04.748394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.748407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.393 qpair failed and we were unable to recover it. 00:29:15.393 [2024-06-10 12:18:04.748628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.393 [2024-06-10 12:18:04.748642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.394 qpair failed and we were unable to recover it. 00:29:15.394 [2024-06-10 12:18:04.748742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.394 [2024-06-10 12:18:04.748755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.394 qpair failed and we were unable to recover it. 00:29:15.394 [2024-06-10 12:18:04.748826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.394 [2024-06-10 12:18:04.748839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.394 qpair failed and we were unable to recover it. 00:29:15.394 [2024-06-10 12:18:04.748996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.394 [2024-06-10 12:18:04.749009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.394 qpair failed and we were unable to recover it. 
00:29:15.394 [2024-06-10 12:18:04.749169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.394 [2024-06-10 12:18:04.749183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.394 qpair failed and we were unable to recover it. 00:29:15.394 [2024-06-10 12:18:04.749345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.394 [2024-06-10 12:18:04.749358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.394 qpair failed and we were unable to recover it. 00:29:15.394 [2024-06-10 12:18:04.749515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.394 [2024-06-10 12:18:04.749528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.394 qpair failed and we were unable to recover it. 00:29:15.394 [2024-06-10 12:18:04.749694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.394 [2024-06-10 12:18:04.749707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.394 qpair failed and we were unable to recover it. 00:29:15.394 [2024-06-10 12:18:04.749817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.394 [2024-06-10 12:18:04.749830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.394 qpair failed and we were unable to recover it. 
00:29:15.394 [2024-06-10 12:18:04.749980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.394 [2024-06-10 12:18:04.749993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.394 qpair failed and we were unable to recover it. 00:29:15.394 [2024-06-10 12:18:04.750071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.394 [2024-06-10 12:18:04.750083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.394 qpair failed and we were unable to recover it. 00:29:15.394 [2024-06-10 12:18:04.750187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.394 [2024-06-10 12:18:04.750199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.394 qpair failed and we were unable to recover it. 00:29:15.394 [2024-06-10 12:18:04.750444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.394 [2024-06-10 12:18:04.750464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.394 qpair failed and we were unable to recover it. 00:29:15.394 [2024-06-10 12:18:04.750634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.394 [2024-06-10 12:18:04.750647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.394 qpair failed and we were unable to recover it. 
00:29:15.394 [2024-06-10 12:18:04.750726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.394 [2024-06-10 12:18:04.750738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.394 qpair failed and we were unable to recover it. 00:29:15.394 [2024-06-10 12:18:04.750967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.394 [2024-06-10 12:18:04.750981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.394 qpair failed and we were unable to recover it. 00:29:15.394 [2024-06-10 12:18:04.751155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.394 [2024-06-10 12:18:04.751168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.394 qpair failed and we were unable to recover it. 00:29:15.394 [2024-06-10 12:18:04.751325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.394 [2024-06-10 12:18:04.751338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.394 qpair failed and we were unable to recover it. 00:29:15.394 [2024-06-10 12:18:04.751450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.394 [2024-06-10 12:18:04.751464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.394 qpair failed and we were unable to recover it. 
00:29:15.394 [2024-06-10 12:18:04.751628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.394 [2024-06-10 12:18:04.751642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.394 qpair failed and we were unable to recover it. 00:29:15.394 [2024-06-10 12:18:04.751831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.394 [2024-06-10 12:18:04.751844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.394 qpair failed and we were unable to recover it. 00:29:15.394 [2024-06-10 12:18:04.752011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.394 [2024-06-10 12:18:04.752024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.394 qpair failed and we were unable to recover it. 00:29:15.394 [2024-06-10 12:18:04.752198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.394 [2024-06-10 12:18:04.752211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.394 qpair failed and we were unable to recover it. 00:29:15.394 [2024-06-10 12:18:04.752393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.394 [2024-06-10 12:18:04.752406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.394 qpair failed and we were unable to recover it. 
00:29:15.394 [2024-06-10 12:18:04.752552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.394 [2024-06-10 12:18:04.752566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.394 qpair failed and we were unable to recover it. 00:29:15.394 [2024-06-10 12:18:04.752783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.394 [2024-06-10 12:18:04.752797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.394 qpair failed and we were unable to recover it. 00:29:15.394 [2024-06-10 12:18:04.752965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.394 [2024-06-10 12:18:04.752978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.394 qpair failed and we were unable to recover it. 00:29:15.394 [2024-06-10 12:18:04.753223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.394 [2024-06-10 12:18:04.753236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.394 qpair failed and we were unable to recover it. 00:29:15.394 [2024-06-10 12:18:04.753335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.394 [2024-06-10 12:18:04.753349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.394 qpair failed and we were unable to recover it. 
00:29:15.394 [2024-06-10 12:18:04.753459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.394 [2024-06-10 12:18:04.753472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.394 qpair failed and we were unable to recover it. 00:29:15.394 [2024-06-10 12:18:04.753580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.394 [2024-06-10 12:18:04.753594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.394 qpair failed and we were unable to recover it. 00:29:15.394 [2024-06-10 12:18:04.753740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.394 [2024-06-10 12:18:04.753754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.394 qpair failed and we were unable to recover it. 00:29:15.394 [2024-06-10 12:18:04.753845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.394 [2024-06-10 12:18:04.753858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.394 qpair failed and we were unable to recover it. 00:29:15.394 [2024-06-10 12:18:04.754071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.394 [2024-06-10 12:18:04.754084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 
00:29:15.395 [2024-06-10 12:18:04.754250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.754264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 00:29:15.395 [2024-06-10 12:18:04.754357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.754369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 00:29:15.395 [2024-06-10 12:18:04.754542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.754556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 00:29:15.395 [2024-06-10 12:18:04.754715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.754728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 00:29:15.395 [2024-06-10 12:18:04.754945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.754958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 
00:29:15.395 [2024-06-10 12:18:04.755072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.755085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 00:29:15.395 [2024-06-10 12:18:04.755177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.755189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 00:29:15.395 [2024-06-10 12:18:04.755301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.755315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 00:29:15.395 [2024-06-10 12:18:04.755516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.755529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 00:29:15.395 [2024-06-10 12:18:04.755699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.755712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 
00:29:15.395 [2024-06-10 12:18:04.755809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.755821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 00:29:15.395 [2024-06-10 12:18:04.755920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.755933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 00:29:15.395 [2024-06-10 12:18:04.756167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.756180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 00:29:15.395 [2024-06-10 12:18:04.756344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.756358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 00:29:15.395 [2024-06-10 12:18:04.756509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.756522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 
00:29:15.395 [2024-06-10 12:18:04.756613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.756625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 00:29:15.395 [2024-06-10 12:18:04.756778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.756791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 00:29:15.395 [2024-06-10 12:18:04.756906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.756919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 00:29:15.395 [2024-06-10 12:18:04.757007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.757019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 00:29:15.395 [2024-06-10 12:18:04.757182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.757195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 
00:29:15.395 [2024-06-10 12:18:04.757358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.757372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 00:29:15.395 [2024-06-10 12:18:04.757585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.757598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 00:29:15.395 [2024-06-10 12:18:04.757744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.757757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 00:29:15.395 [2024-06-10 12:18:04.757904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.757917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 00:29:15.395 [2024-06-10 12:18:04.758004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.758017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 
00:29:15.395 [2024-06-10 12:18:04.758281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.758295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 00:29:15.395 [2024-06-10 12:18:04.758374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.758387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 00:29:15.395 [2024-06-10 12:18:04.758557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.758571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 00:29:15.395 [2024-06-10 12:18:04.758679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.758692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 00:29:15.395 [2024-06-10 12:18:04.758768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.758786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 
00:29:15.395 [2024-06-10 12:18:04.758934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.758947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 00:29:15.395 [2024-06-10 12:18:04.759159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.759172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 00:29:15.395 [2024-06-10 12:18:04.759432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.759445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 00:29:15.395 [2024-06-10 12:18:04.759598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.759611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 00:29:15.395 [2024-06-10 12:18:04.759718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.395 [2024-06-10 12:18:04.759731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.395 qpair failed and we were unable to recover it. 
00:29:15.395 [2024-06-10 12:18:04.759894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.395 [2024-06-10 12:18:04.759908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.395 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / qpair recovery failure messages for tqpair=0x7f5044000b90 (10.0.0.2:4420) repeated through 2024-06-10 12:18:04.780371; duplicates omitted ...]
00:29:15.399 [2024-06-10 12:18:04.780531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.399 [2024-06-10 12:18:04.780544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.399 qpair failed and we were unable to recover it. 00:29:15.399 [2024-06-10 12:18:04.780698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.399 [2024-06-10 12:18:04.780711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.399 qpair failed and we were unable to recover it. 00:29:15.399 [2024-06-10 12:18:04.780860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.399 [2024-06-10 12:18:04.780873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.399 qpair failed and we were unable to recover it. 00:29:15.399 [2024-06-10 12:18:04.781041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.399 [2024-06-10 12:18:04.781055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.399 qpair failed and we were unable to recover it. 00:29:15.399 [2024-06-10 12:18:04.781131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.399 [2024-06-10 12:18:04.781144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.399 qpair failed and we were unable to recover it. 
00:29:15.399 [2024-06-10 12:18:04.781253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.399 [2024-06-10 12:18:04.781266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.399 qpair failed and we were unable to recover it. 00:29:15.399 [2024-06-10 12:18:04.781454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.399 [2024-06-10 12:18:04.781467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.399 qpair failed and we were unable to recover it. 00:29:15.399 [2024-06-10 12:18:04.781637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.399 [2024-06-10 12:18:04.781651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.399 qpair failed and we were unable to recover it. 00:29:15.399 [2024-06-10 12:18:04.781864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.399 [2024-06-10 12:18:04.781878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.399 qpair failed and we were unable to recover it. 00:29:15.399 [2024-06-10 12:18:04.782093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.399 [2024-06-10 12:18:04.782106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.399 qpair failed and we were unable to recover it. 
00:29:15.399 [2024-06-10 12:18:04.782350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.399 [2024-06-10 12:18:04.782363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.399 qpair failed and we were unable to recover it. 00:29:15.399 [2024-06-10 12:18:04.782468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.399 [2024-06-10 12:18:04.782486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.399 qpair failed and we were unable to recover it. 00:29:15.399 [2024-06-10 12:18:04.782700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.399 [2024-06-10 12:18:04.782713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.399 qpair failed and we were unable to recover it. 00:29:15.399 [2024-06-10 12:18:04.782951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.399 [2024-06-10 12:18:04.782964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.399 qpair failed and we were unable to recover it. 00:29:15.399 [2024-06-10 12:18:04.783124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.399 [2024-06-10 12:18:04.783138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.399 qpair failed and we were unable to recover it. 
00:29:15.399 [2024-06-10 12:18:04.783307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.399 [2024-06-10 12:18:04.783323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.399 qpair failed and we were unable to recover it. 00:29:15.399 [2024-06-10 12:18:04.783487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.399 [2024-06-10 12:18:04.783500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.399 qpair failed and we were unable to recover it. 00:29:15.399 [2024-06-10 12:18:04.783648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.399 [2024-06-10 12:18:04.783661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.399 qpair failed and we were unable to recover it. 00:29:15.399 [2024-06-10 12:18:04.783796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.399 [2024-06-10 12:18:04.783810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.399 qpair failed and we were unable to recover it. 00:29:15.399 [2024-06-10 12:18:04.783907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.399 [2024-06-10 12:18:04.783920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.399 qpair failed and we were unable to recover it. 
00:29:15.399 [2024-06-10 12:18:04.783986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.399 [2024-06-10 12:18:04.783998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.399 qpair failed and we were unable to recover it. 00:29:15.399 [2024-06-10 12:18:04.784165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.399 [2024-06-10 12:18:04.784178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.399 qpair failed and we were unable to recover it. 00:29:15.399 [2024-06-10 12:18:04.784259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.399 [2024-06-10 12:18:04.784271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.399 qpair failed and we were unable to recover it. 00:29:15.399 [2024-06-10 12:18:04.784359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.399 [2024-06-10 12:18:04.784371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.399 qpair failed and we were unable to recover it. 00:29:15.399 [2024-06-10 12:18:04.784525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.399 [2024-06-10 12:18:04.784539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.399 qpair failed and we were unable to recover it. 
00:29:15.399 [2024-06-10 12:18:04.784712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.399 [2024-06-10 12:18:04.784725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.399 qpair failed and we were unable to recover it. 00:29:15.399 [2024-06-10 12:18:04.784825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.399 [2024-06-10 12:18:04.784838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.399 qpair failed and we were unable to recover it. 00:29:15.399 [2024-06-10 12:18:04.785072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.399 [2024-06-10 12:18:04.785085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.399 qpair failed and we were unable to recover it. 00:29:15.399 [2024-06-10 12:18:04.785228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.399 [2024-06-10 12:18:04.785241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.399 qpair failed and we were unable to recover it. 00:29:15.399 [2024-06-10 12:18:04.785361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.399 [2024-06-10 12:18:04.785374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.399 qpair failed and we were unable to recover it. 
00:29:15.399 [2024-06-10 12:18:04.785468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.399 [2024-06-10 12:18:04.785485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.399 qpair failed and we were unable to recover it. 00:29:15.399 [2024-06-10 12:18:04.785569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.399 [2024-06-10 12:18:04.785581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.399 qpair failed and we were unable to recover it. 00:29:15.399 [2024-06-10 12:18:04.785729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.399 [2024-06-10 12:18:04.785742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 00:29:15.400 [2024-06-10 12:18:04.785978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.785991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 00:29:15.400 [2024-06-10 12:18:04.786081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.786093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 
00:29:15.400 [2024-06-10 12:18:04.786189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.786202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 00:29:15.400 [2024-06-10 12:18:04.786392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.786405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 00:29:15.400 [2024-06-10 12:18:04.786665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.786678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 00:29:15.400 [2024-06-10 12:18:04.786921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.786934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 00:29:15.400 [2024-06-10 12:18:04.787147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.787160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 
00:29:15.400 [2024-06-10 12:18:04.787402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.787415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 00:29:15.400 [2024-06-10 12:18:04.787630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.787644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 00:29:15.400 [2024-06-10 12:18:04.787798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.787811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 00:29:15.400 [2024-06-10 12:18:04.788060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.788073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 00:29:15.400 [2024-06-10 12:18:04.788284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.788297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 
00:29:15.400 [2024-06-10 12:18:04.788512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.788526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 00:29:15.400 [2024-06-10 12:18:04.788685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.788698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 00:29:15.400 [2024-06-10 12:18:04.788794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.788807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 00:29:15.400 [2024-06-10 12:18:04.788984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.788998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 00:29:15.400 [2024-06-10 12:18:04.789166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.789179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 
00:29:15.400 [2024-06-10 12:18:04.789345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.789359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 00:29:15.400 [2024-06-10 12:18:04.789525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.789538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 00:29:15.400 [2024-06-10 12:18:04.789779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.789792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 00:29:15.400 [2024-06-10 12:18:04.789937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.789951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 00:29:15.400 [2024-06-10 12:18:04.790106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.790119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 
00:29:15.400 [2024-06-10 12:18:04.790276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.790291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 00:29:15.400 [2024-06-10 12:18:04.790394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.790408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 00:29:15.400 [2024-06-10 12:18:04.790627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.790641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 00:29:15.400 [2024-06-10 12:18:04.790775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.790788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 00:29:15.400 [2024-06-10 12:18:04.790963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.790977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 
00:29:15.400 [2024-06-10 12:18:04.791194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.791207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 00:29:15.400 [2024-06-10 12:18:04.791430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.791444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 00:29:15.400 [2024-06-10 12:18:04.791533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.791545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 00:29:15.400 [2024-06-10 12:18:04.791704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.791717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 00:29:15.400 [2024-06-10 12:18:04.791879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.791892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 
00:29:15.400 [2024-06-10 12:18:04.791985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.791998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 00:29:15.400 [2024-06-10 12:18:04.792239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.792253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 00:29:15.400 [2024-06-10 12:18:04.792410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.792424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 00:29:15.400 [2024-06-10 12:18:04.792515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.792528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 00:29:15.400 [2024-06-10 12:18:04.792614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.792628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 
00:29:15.400 [2024-06-10 12:18:04.792722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.400 [2024-06-10 12:18:04.792734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.400 qpair failed and we were unable to recover it. 00:29:15.400 [2024-06-10 12:18:04.792827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.401 [2024-06-10 12:18:04.792839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.401 qpair failed and we were unable to recover it. 00:29:15.401 [2024-06-10 12:18:04.793005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.401 [2024-06-10 12:18:04.793018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.401 qpair failed and we were unable to recover it. 00:29:15.401 [2024-06-10 12:18:04.793231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.401 [2024-06-10 12:18:04.793244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.401 qpair failed and we were unable to recover it. 00:29:15.401 [2024-06-10 12:18:04.793395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.401 [2024-06-10 12:18:04.793408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.401 qpair failed and we were unable to recover it. 
00:29:15.401 [2024-06-10 12:18:04.793506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.401 [2024-06-10 12:18:04.793519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.401 qpair failed and we were unable to recover it. 00:29:15.401 [2024-06-10 12:18:04.793684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.401 [2024-06-10 12:18:04.793697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.401 qpair failed and we were unable to recover it. 00:29:15.401 [2024-06-10 12:18:04.793799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.401 [2024-06-10 12:18:04.793812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.401 qpair failed and we were unable to recover it. 00:29:15.401 [2024-06-10 12:18:04.793991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.401 [2024-06-10 12:18:04.794004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.401 qpair failed and we were unable to recover it. 00:29:15.401 [2024-06-10 12:18:04.794171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.401 [2024-06-10 12:18:04.794184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.401 qpair failed and we were unable to recover it. 
00:29:15.401 [2024-06-10 12:18:04.794353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.401 [2024-06-10 12:18:04.794366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.401 qpair failed and we were unable to recover it. 00:29:15.401 [2024-06-10 12:18:04.794585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.401 [2024-06-10 12:18:04.794598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.401 qpair failed and we were unable to recover it. 00:29:15.401 [2024-06-10 12:18:04.794687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.401 [2024-06-10 12:18:04.794700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.401 qpair failed and we were unable to recover it. 00:29:15.401 [2024-06-10 12:18:04.794800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.401 [2024-06-10 12:18:04.794812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.401 qpair failed and we were unable to recover it. 00:29:15.401 [2024-06-10 12:18:04.794916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.401 [2024-06-10 12:18:04.794929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.401 qpair failed and we were unable to recover it. 
00:29:15.401 [2024-06-10 12:18:04.795102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.401 [2024-06-10 12:18:04.795115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.401 qpair failed and we were unable to recover it. 00:29:15.401 [2024-06-10 12:18:04.795330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.401 [2024-06-10 12:18:04.795343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.401 qpair failed and we were unable to recover it. 00:29:15.401 [2024-06-10 12:18:04.795427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.401 [2024-06-10 12:18:04.795439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.401 qpair failed and we were unable to recover it. 00:29:15.401 [2024-06-10 12:18:04.795528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.401 [2024-06-10 12:18:04.795541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.401 qpair failed and we were unable to recover it. 00:29:15.401 [2024-06-10 12:18:04.795711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.401 [2024-06-10 12:18:04.795724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.401 qpair failed and we were unable to recover it. 
00:29:15.401 [2024-06-10 12:18:04.795920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.401 [2024-06-10 12:18:04.795933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.401 qpair failed and we were unable to recover it. 00:29:15.401 [2024-06-10 12:18:04.796031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.401 [2024-06-10 12:18:04.796043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.401 qpair failed and we were unable to recover it. 00:29:15.401 [2024-06-10 12:18:04.796121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.401 [2024-06-10 12:18:04.796133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.401 qpair failed and we were unable to recover it. 00:29:15.401 [2024-06-10 12:18:04.796279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.401 [2024-06-10 12:18:04.796292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.401 qpair failed and we were unable to recover it. 00:29:15.401 [2024-06-10 12:18:04.796521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.401 [2024-06-10 12:18:04.796535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.401 qpair failed and we were unable to recover it. 
00:29:15.401 [2024-06-10 12:18:04.796695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.401 [2024-06-10 12:18:04.796710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.401 qpair failed and we were unable to recover it. 00:29:15.401 [2024-06-10 12:18:04.796932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.401 [2024-06-10 12:18:04.796945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.401 qpair failed and we were unable to recover it. 00:29:15.401 [2024-06-10 12:18:04.797107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.401 [2024-06-10 12:18:04.797120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.401 qpair failed and we were unable to recover it. 00:29:15.401 [2024-06-10 12:18:04.797268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.401 [2024-06-10 12:18:04.797282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.401 qpair failed and we were unable to recover it. 00:29:15.401 [2024-06-10 12:18:04.797375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.401 [2024-06-10 12:18:04.797387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.401 qpair failed and we were unable to recover it. 
00:29:15.401 [2024-06-10 12:18:04.797547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.401 [2024-06-10 12:18:04.797561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.401 qpair failed and we were unable to recover it. 00:29:15.401 [2024-06-10 12:18:04.797774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.401 [2024-06-10 12:18:04.797788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.401 qpair failed and we were unable to recover it. 00:29:15.401 [2024-06-10 12:18:04.798003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.798017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 00:29:15.402 [2024-06-10 12:18:04.798233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.798246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 00:29:15.402 [2024-06-10 12:18:04.798345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.798357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 
00:29:15.402 [2024-06-10 12:18:04.798575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.798588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 00:29:15.402 [2024-06-10 12:18:04.798736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.798749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 00:29:15.402 [2024-06-10 12:18:04.798909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.798922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 00:29:15.402 [2024-06-10 12:18:04.799095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.799108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 00:29:15.402 [2024-06-10 12:18:04.799277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.799290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 
00:29:15.402 [2024-06-10 12:18:04.799387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.799400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 00:29:15.402 [2024-06-10 12:18:04.799632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.799646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 00:29:15.402 [2024-06-10 12:18:04.799809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.799822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 00:29:15.402 [2024-06-10 12:18:04.799980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.799993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 00:29:15.402 [2024-06-10 12:18:04.800141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.800154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 
00:29:15.402 [2024-06-10 12:18:04.800236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.800249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 00:29:15.402 [2024-06-10 12:18:04.800466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.800484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 00:29:15.402 [2024-06-10 12:18:04.800634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.800647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 00:29:15.402 [2024-06-10 12:18:04.800867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.800880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 00:29:15.402 [2024-06-10 12:18:04.800962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.800975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 
00:29:15.402 [2024-06-10 12:18:04.801089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.801103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 00:29:15.402 [2024-06-10 12:18:04.801364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.801377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 00:29:15.402 [2024-06-10 12:18:04.801538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.801551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 00:29:15.402 [2024-06-10 12:18:04.801764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.801777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 00:29:15.402 [2024-06-10 12:18:04.801856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.801868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 
00:29:15.402 [2024-06-10 12:18:04.802018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.802031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 00:29:15.402 [2024-06-10 12:18:04.802266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.802280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 00:29:15.402 [2024-06-10 12:18:04.802363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.802375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 00:29:15.402 [2024-06-10 12:18:04.802496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.802510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 00:29:15.402 [2024-06-10 12:18:04.802684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.802698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 
00:29:15.402 [2024-06-10 12:18:04.802844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.802857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 00:29:15.402 [2024-06-10 12:18:04.803024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.803037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 00:29:15.402 [2024-06-10 12:18:04.803195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.803208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 00:29:15.402 [2024-06-10 12:18:04.803367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.803380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 00:29:15.402 [2024-06-10 12:18:04.803487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.803501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 
00:29:15.402 [2024-06-10 12:18:04.803581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.803595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 00:29:15.402 [2024-06-10 12:18:04.803806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.803820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 00:29:15.402 [2024-06-10 12:18:04.803918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.803931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 00:29:15.402 [2024-06-10 12:18:04.804107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.402 [2024-06-10 12:18:04.804121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.402 qpair failed and we were unable to recover it. 00:29:15.402 [2024-06-10 12:18:04.804306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.403 [2024-06-10 12:18:04.804319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.403 qpair failed and we were unable to recover it. 
00:29:15.403 [2024-06-10 12:18:04.804534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.403 [2024-06-10 12:18:04.804547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.403 qpair failed and we were unable to recover it. 00:29:15.403 [2024-06-10 12:18:04.804662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.403 [2024-06-10 12:18:04.804675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.403 qpair failed and we were unable to recover it. 00:29:15.403 [2024-06-10 12:18:04.804814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.403 [2024-06-10 12:18:04.804827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.403 qpair failed and we were unable to recover it. 00:29:15.403 [2024-06-10 12:18:04.804910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.403 [2024-06-10 12:18:04.804922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.403 qpair failed and we were unable to recover it. 00:29:15.403 [2024-06-10 12:18:04.805198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.403 [2024-06-10 12:18:04.805212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.403 qpair failed and we were unable to recover it. 
00:29:15.403 [2024-06-10 12:18:04.805443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.403 [2024-06-10 12:18:04.805456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.403 qpair failed and we were unable to recover it. 00:29:15.403 [2024-06-10 12:18:04.805628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.403 [2024-06-10 12:18:04.805641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.403 qpair failed and we were unable to recover it. 00:29:15.403 [2024-06-10 12:18:04.805732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.403 [2024-06-10 12:18:04.805744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.403 qpair failed and we were unable to recover it. 00:29:15.403 [2024-06-10 12:18:04.805825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.403 [2024-06-10 12:18:04.805837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.403 qpair failed and we were unable to recover it. 00:29:15.403 [2024-06-10 12:18:04.805947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.403 [2024-06-10 12:18:04.805960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.403 qpair failed and we were unable to recover it. 
00:29:15.403 [2024-06-10 12:18:04.806201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.403 [2024-06-10 12:18:04.806214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.403 qpair failed and we were unable to recover it. 00:29:15.403 [2024-06-10 12:18:04.806376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.403 [2024-06-10 12:18:04.806389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.403 qpair failed and we were unable to recover it. 00:29:15.403 [2024-06-10 12:18:04.806541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.403 [2024-06-10 12:18:04.806555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.403 qpair failed and we were unable to recover it. 00:29:15.403 [2024-06-10 12:18:04.806789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.403 [2024-06-10 12:18:04.806802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.403 qpair failed and we were unable to recover it. 00:29:15.403 [2024-06-10 12:18:04.806880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.403 [2024-06-10 12:18:04.806892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.403 qpair failed and we were unable to recover it. 
00:29:15.403 [2024-06-10 12:18:04.807057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.403 [2024-06-10 12:18:04.807070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.403 qpair failed and we were unable to recover it. 00:29:15.403 [2024-06-10 12:18:04.807227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.403 [2024-06-10 12:18:04.807241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.403 qpair failed and we were unable to recover it. 00:29:15.403 [2024-06-10 12:18:04.807455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.403 [2024-06-10 12:18:04.807467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.403 qpair failed and we were unable to recover it. 00:29:15.403 [2024-06-10 12:18:04.807629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.403 [2024-06-10 12:18:04.807643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.403 qpair failed and we were unable to recover it. 00:29:15.403 [2024-06-10 12:18:04.807790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.403 [2024-06-10 12:18:04.807804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.403 qpair failed and we were unable to recover it. 
00:29:15.403 [2024-06-10 12:18:04.807967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.403 [2024-06-10 12:18:04.807980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.403 qpair failed and we were unable to recover it. 00:29:15.403 [2024-06-10 12:18:04.808208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.403 [2024-06-10 12:18:04.808221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.403 qpair failed and we were unable to recover it. 00:29:15.403 [2024-06-10 12:18:04.808436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.403 [2024-06-10 12:18:04.808449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.403 qpair failed and we were unable to recover it. 00:29:15.403 [2024-06-10 12:18:04.808622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.403 [2024-06-10 12:18:04.808635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.403 qpair failed and we were unable to recover it. 00:29:15.403 [2024-06-10 12:18:04.808796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.403 [2024-06-10 12:18:04.808809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.403 qpair failed and we were unable to recover it. 
00:29:15.403 [2024-06-10 12:18:04.809046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.403 [2024-06-10 12:18:04.809059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.403 qpair failed and we were unable to recover it. 00:29:15.403 [2024-06-10 12:18:04.809222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.403 [2024-06-10 12:18:04.809235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.403 qpair failed and we were unable to recover it. 00:29:15.403 [2024-06-10 12:18:04.809449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.403 [2024-06-10 12:18:04.809462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.403 qpair failed and we were unable to recover it. 00:29:15.403 [2024-06-10 12:18:04.809640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.403 [2024-06-10 12:18:04.809654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.403 qpair failed and we were unable to recover it. 00:29:15.403 [2024-06-10 12:18:04.809742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.403 [2024-06-10 12:18:04.809754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.403 qpair failed and we were unable to recover it. 
00:29:15.403 [2024-06-10 12:18:04.809993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.403 [2024-06-10 12:18:04.810007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.403 qpair failed and we were unable to recover it. 
00:29:15.407 [2024-06-10 12:18:04.830343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.407 [2024-06-10 12:18:04.830357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.407 qpair failed and we were unable to recover it. 00:29:15.407 [2024-06-10 12:18:04.830520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.407 [2024-06-10 12:18:04.830533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.407 qpair failed and we were unable to recover it. 00:29:15.407 [2024-06-10 12:18:04.830649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.407 [2024-06-10 12:18:04.830661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.407 qpair failed and we were unable to recover it. 00:29:15.407 [2024-06-10 12:18:04.830774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.407 [2024-06-10 12:18:04.830788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.407 qpair failed and we were unable to recover it. 00:29:15.407 [2024-06-10 12:18:04.830856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.407 [2024-06-10 12:18:04.830868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.407 qpair failed and we were unable to recover it. 
00:29:15.407 [2024-06-10 12:18:04.831056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.407 [2024-06-10 12:18:04.831069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.407 qpair failed and we were unable to recover it. 00:29:15.407 [2024-06-10 12:18:04.831230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.407 [2024-06-10 12:18:04.831243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.407 qpair failed and we were unable to recover it. 00:29:15.407 [2024-06-10 12:18:04.831406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.407 [2024-06-10 12:18:04.831419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.407 qpair failed and we were unable to recover it. 00:29:15.407 [2024-06-10 12:18:04.831585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.407 [2024-06-10 12:18:04.831599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.407 qpair failed and we were unable to recover it. 00:29:15.407 [2024-06-10 12:18:04.831775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.407 [2024-06-10 12:18:04.831789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.407 qpair failed and we were unable to recover it. 
00:29:15.407 [2024-06-10 12:18:04.831975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.407 [2024-06-10 12:18:04.831990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.407 qpair failed and we were unable to recover it. 00:29:15.407 [2024-06-10 12:18:04.832139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.407 [2024-06-10 12:18:04.832152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.407 qpair failed and we were unable to recover it. 00:29:15.407 [2024-06-10 12:18:04.832250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.407 [2024-06-10 12:18:04.832264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.407 qpair failed and we were unable to recover it. 00:29:15.407 [2024-06-10 12:18:04.832467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.407 [2024-06-10 12:18:04.832484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.407 qpair failed and we were unable to recover it. 00:29:15.407 [2024-06-10 12:18:04.832667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.407 [2024-06-10 12:18:04.832681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.407 qpair failed and we were unable to recover it. 
00:29:15.407 [2024-06-10 12:18:04.832894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.407 [2024-06-10 12:18:04.832906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.407 qpair failed and we were unable to recover it. 00:29:15.407 [2024-06-10 12:18:04.833112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.407 [2024-06-10 12:18:04.833125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.407 qpair failed and we were unable to recover it. 00:29:15.407 [2024-06-10 12:18:04.833352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.407 [2024-06-10 12:18:04.833365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.407 qpair failed and we were unable to recover it. 00:29:15.407 [2024-06-10 12:18:04.833531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.407 [2024-06-10 12:18:04.833544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.407 qpair failed and we were unable to recover it. 00:29:15.407 [2024-06-10 12:18:04.833649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.407 [2024-06-10 12:18:04.833662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.407 qpair failed and we were unable to recover it. 
00:29:15.407 [2024-06-10 12:18:04.833817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.407 [2024-06-10 12:18:04.833830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.407 qpair failed and we were unable to recover it. 00:29:15.407 [2024-06-10 12:18:04.833999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.407 [2024-06-10 12:18:04.834012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.407 qpair failed and we were unable to recover it. 00:29:15.407 [2024-06-10 12:18:04.834191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.407 [2024-06-10 12:18:04.834205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.407 qpair failed and we were unable to recover it. 00:29:15.408 [2024-06-10 12:18:04.834355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.408 [2024-06-10 12:18:04.834368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.408 qpair failed and we were unable to recover it. 00:29:15.408 [2024-06-10 12:18:04.834537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.408 [2024-06-10 12:18:04.834550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.408 qpair failed and we were unable to recover it. 
00:29:15.408 [2024-06-10 12:18:04.834697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.408 [2024-06-10 12:18:04.834710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.408 qpair failed and we were unable to recover it. 00:29:15.408 [2024-06-10 12:18:04.834866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.408 [2024-06-10 12:18:04.834879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.408 qpair failed and we were unable to recover it. 00:29:15.408 [2024-06-10 12:18:04.835038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.408 [2024-06-10 12:18:04.835051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.408 qpair failed and we were unable to recover it. 00:29:15.408 [2024-06-10 12:18:04.835159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.408 [2024-06-10 12:18:04.835172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.408 qpair failed and we were unable to recover it. 00:29:15.408 [2024-06-10 12:18:04.835274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.408 [2024-06-10 12:18:04.835287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.408 qpair failed and we were unable to recover it. 
00:29:15.408 [2024-06-10 12:18:04.835379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.408 [2024-06-10 12:18:04.835391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.408 qpair failed and we were unable to recover it. 00:29:15.408 [2024-06-10 12:18:04.835558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.408 [2024-06-10 12:18:04.835572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.408 qpair failed and we were unable to recover it. 00:29:15.408 [2024-06-10 12:18:04.835718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.408 [2024-06-10 12:18:04.835732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.408 qpair failed and we were unable to recover it. 00:29:15.408 [2024-06-10 12:18:04.835867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.408 [2024-06-10 12:18:04.835880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.408 qpair failed and we were unable to recover it. 00:29:15.408 [2024-06-10 12:18:04.835973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.408 [2024-06-10 12:18:04.835985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.408 qpair failed and we were unable to recover it. 
00:29:15.408 [2024-06-10 12:18:04.836144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.408 [2024-06-10 12:18:04.836157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.408 qpair failed and we were unable to recover it. 00:29:15.408 [2024-06-10 12:18:04.836447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.408 [2024-06-10 12:18:04.836460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.408 qpair failed and we were unable to recover it. 00:29:15.408 [2024-06-10 12:18:04.836707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.408 [2024-06-10 12:18:04.836744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.408 qpair failed and we were unable to recover it. 00:29:15.408 [2024-06-10 12:18:04.836870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.408 [2024-06-10 12:18:04.836891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.408 qpair failed and we were unable to recover it. 00:29:15.408 [2024-06-10 12:18:04.837130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.408 [2024-06-10 12:18:04.837147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.408 qpair failed and we were unable to recover it. 
00:29:15.408 [2024-06-10 12:18:04.837414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.408 [2024-06-10 12:18:04.837431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.408 qpair failed and we were unable to recover it. 00:29:15.408 [2024-06-10 12:18:04.837684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.408 [2024-06-10 12:18:04.837703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.408 qpair failed and we were unable to recover it. 00:29:15.408 [2024-06-10 12:18:04.837819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.408 [2024-06-10 12:18:04.837837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.408 qpair failed and we were unable to recover it. 00:29:15.408 [2024-06-10 12:18:04.838021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.408 [2024-06-10 12:18:04.838038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.408 qpair failed and we were unable to recover it. 00:29:15.408 [2024-06-10 12:18:04.838197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.408 [2024-06-10 12:18:04.838215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.408 qpair failed and we were unable to recover it. 
00:29:15.408 [2024-06-10 12:18:04.838407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.408 [2024-06-10 12:18:04.838424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.408 qpair failed and we were unable to recover it. 00:29:15.408 [2024-06-10 12:18:04.838540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.408 [2024-06-10 12:18:04.838554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.408 qpair failed and we were unable to recover it. 00:29:15.408 [2024-06-10 12:18:04.838821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.408 [2024-06-10 12:18:04.838834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.408 qpair failed and we were unable to recover it. 00:29:15.408 [2024-06-10 12:18:04.838993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.408 [2024-06-10 12:18:04.839006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.408 qpair failed and we were unable to recover it. 00:29:15.408 [2024-06-10 12:18:04.839249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.408 [2024-06-10 12:18:04.839262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.408 qpair failed and we were unable to recover it. 
00:29:15.408 [2024-06-10 12:18:04.839513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.408 [2024-06-10 12:18:04.839528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.408 qpair failed and we were unable to recover it. 00:29:15.408 [2024-06-10 12:18:04.839610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.408 [2024-06-10 12:18:04.839622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.408 qpair failed and we were unable to recover it. 00:29:15.408 [2024-06-10 12:18:04.839717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.408 [2024-06-10 12:18:04.839729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.408 qpair failed and we were unable to recover it. 00:29:15.408 [2024-06-10 12:18:04.839946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.408 [2024-06-10 12:18:04.839959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.408 qpair failed and we were unable to recover it. 00:29:15.408 [2024-06-10 12:18:04.840172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.408 [2024-06-10 12:18:04.840185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.408 qpair failed and we were unable to recover it. 
00:29:15.408 [2024-06-10 12:18:04.840279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.408 [2024-06-10 12:18:04.840292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.408 qpair failed and we were unable to recover it. 00:29:15.408 [2024-06-10 12:18:04.840528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.408 [2024-06-10 12:18:04.840541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.408 qpair failed and we were unable to recover it. 00:29:15.408 [2024-06-10 12:18:04.840707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.409 [2024-06-10 12:18:04.840720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.409 qpair failed and we were unable to recover it. 00:29:15.409 [2024-06-10 12:18:04.840869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.409 [2024-06-10 12:18:04.840882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.409 qpair failed and we were unable to recover it. 00:29:15.409 [2024-06-10 12:18:04.840987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.409 [2024-06-10 12:18:04.841000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.409 qpair failed and we were unable to recover it. 
00:29:15.409 [2024-06-10 12:18:04.841148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.409 [2024-06-10 12:18:04.841161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.409 qpair failed and we were unable to recover it. 00:29:15.409 [2024-06-10 12:18:04.841387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.409 [2024-06-10 12:18:04.841400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.409 qpair failed and we were unable to recover it. 00:29:15.409 [2024-06-10 12:18:04.841506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.409 [2024-06-10 12:18:04.841519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.409 qpair failed and we were unable to recover it. 00:29:15.409 [2024-06-10 12:18:04.841632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.409 [2024-06-10 12:18:04.841645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.409 qpair failed and we were unable to recover it. 00:29:15.409 [2024-06-10 12:18:04.841861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.409 [2024-06-10 12:18:04.841874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.409 qpair failed and we were unable to recover it. 
00:29:15.409 [2024-06-10 12:18:04.842035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.409 [2024-06-10 12:18:04.842049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.409 qpair failed and we were unable to recover it. 00:29:15.409 [2024-06-10 12:18:04.842295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.409 [2024-06-10 12:18:04.842336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.409 qpair failed and we were unable to recover it. 00:29:15.409 [2024-06-10 12:18:04.842568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.409 [2024-06-10 12:18:04.842610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.409 qpair failed and we were unable to recover it. 00:29:15.409 [2024-06-10 12:18:04.842833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.409 [2024-06-10 12:18:04.842874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.409 qpair failed and we were unable to recover it. 00:29:15.694 [2024-06-10 12:18:04.843101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.694 [2024-06-10 12:18:04.843141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.694 qpair failed and we were unable to recover it. 
00:29:15.694 [2024-06-10 12:18:04.843312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.694 [2024-06-10 12:18:04.843357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.694 qpair failed and we were unable to recover it. 00:29:15.694 [2024-06-10 12:18:04.843543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.694 [2024-06-10 12:18:04.843557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.694 qpair failed and we were unable to recover it. 00:29:15.694 [2024-06-10 12:18:04.843648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.694 [2024-06-10 12:18:04.843661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.694 qpair failed and we were unable to recover it. 00:29:15.694 [2024-06-10 12:18:04.843895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.694 [2024-06-10 12:18:04.843909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.694 qpair failed and we were unable to recover it. 00:29:15.694 [2024-06-10 12:18:04.844021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.694 [2024-06-10 12:18:04.844035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.694 qpair failed and we were unable to recover it. 
00:29:15.694 [2024-06-10 12:18:04.844159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.694 [2024-06-10 12:18:04.844172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.694 qpair failed and we were unable to recover it.
[previous three log lines repeated with successive timestamps through 2024-06-10 12:18:04.867902]
00:29:15.697 [2024-06-10 12:18:04.867982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.697 [2024-06-10 12:18:04.867994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.697 qpair failed and we were unable to recover it. 00:29:15.697 [2024-06-10 12:18:04.868217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.697 [2024-06-10 12:18:04.868229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.697 qpair failed and we were unable to recover it. 00:29:15.697 [2024-06-10 12:18:04.868464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.698 [2024-06-10 12:18:04.868479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.698 qpair failed and we were unable to recover it. 00:29:15.698 [2024-06-10 12:18:04.868639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.698 [2024-06-10 12:18:04.868650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.698 qpair failed and we were unable to recover it. 00:29:15.698 [2024-06-10 12:18:04.868886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.698 [2024-06-10 12:18:04.868898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.698 qpair failed and we were unable to recover it. 
00:29:15.698 [2024-06-10 12:18:04.869120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.698 [2024-06-10 12:18:04.869131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.698 qpair failed and we were unable to recover it. 00:29:15.698 [2024-06-10 12:18:04.869216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.698 [2024-06-10 12:18:04.869228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.698 qpair failed and we were unable to recover it. 00:29:15.698 [2024-06-10 12:18:04.869479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.698 [2024-06-10 12:18:04.869493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.698 qpair failed and we were unable to recover it. 00:29:15.698 [2024-06-10 12:18:04.869577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.698 [2024-06-10 12:18:04.869589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.698 qpair failed and we were unable to recover it. 00:29:15.698 [2024-06-10 12:18:04.869678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.698 [2024-06-10 12:18:04.869690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.698 qpair failed and we were unable to recover it. 
00:29:15.698 [2024-06-10 12:18:04.869802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.698 [2024-06-10 12:18:04.869814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.698 qpair failed and we were unable to recover it. 00:29:15.698 [2024-06-10 12:18:04.869986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.698 [2024-06-10 12:18:04.869998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.698 qpair failed and we were unable to recover it. 00:29:15.698 [2024-06-10 12:18:04.870158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.698 [2024-06-10 12:18:04.870170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.698 qpair failed and we were unable to recover it. 00:29:15.698 [2024-06-10 12:18:04.870280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.698 [2024-06-10 12:18:04.870292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.698 qpair failed and we were unable to recover it. 00:29:15.698 [2024-06-10 12:18:04.870490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.698 [2024-06-10 12:18:04.870502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.698 qpair failed and we were unable to recover it. 
00:29:15.698 [2024-06-10 12:18:04.870651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.698 [2024-06-10 12:18:04.870663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.698 qpair failed and we were unable to recover it. 00:29:15.698 [2024-06-10 12:18:04.870816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.698 [2024-06-10 12:18:04.870828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.698 qpair failed and we were unable to recover it. 00:29:15.698 [2024-06-10 12:18:04.871061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.698 [2024-06-10 12:18:04.871073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.698 qpair failed and we were unable to recover it. 00:29:15.698 [2024-06-10 12:18:04.871288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.698 [2024-06-10 12:18:04.871300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.698 qpair failed and we were unable to recover it. 00:29:15.698 [2024-06-10 12:18:04.871473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.698 [2024-06-10 12:18:04.871489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.698 qpair failed and we were unable to recover it. 
00:29:15.698 [2024-06-10 12:18:04.871670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.698 [2024-06-10 12:18:04.871681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.698 qpair failed and we were unable to recover it. 00:29:15.698 [2024-06-10 12:18:04.871964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.698 [2024-06-10 12:18:04.871975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.698 qpair failed and we were unable to recover it. 00:29:15.698 [2024-06-10 12:18:04.872150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.698 [2024-06-10 12:18:04.872162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.698 qpair failed and we were unable to recover it. 00:29:15.698 [2024-06-10 12:18:04.872342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.698 [2024-06-10 12:18:04.872353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.698 qpair failed and we were unable to recover it. 00:29:15.698 [2024-06-10 12:18:04.872443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.698 [2024-06-10 12:18:04.872455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.698 qpair failed and we were unable to recover it. 
00:29:15.698 [2024-06-10 12:18:04.872541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.698 [2024-06-10 12:18:04.872552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.698 qpair failed and we were unable to recover it. 00:29:15.698 [2024-06-10 12:18:04.872632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.698 [2024-06-10 12:18:04.872649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.698 qpair failed and we were unable to recover it. 00:29:15.698 [2024-06-10 12:18:04.872813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.698 [2024-06-10 12:18:04.872825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.698 qpair failed and we were unable to recover it. 00:29:15.698 [2024-06-10 12:18:04.873040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.698 [2024-06-10 12:18:04.873052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.698 qpair failed and we were unable to recover it. 00:29:15.698 [2024-06-10 12:18:04.873231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.698 [2024-06-10 12:18:04.873243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.698 qpair failed and we were unable to recover it. 
00:29:15.698 [2024-06-10 12:18:04.873312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.698 [2024-06-10 12:18:04.873324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.698 qpair failed and we were unable to recover it. 00:29:15.698 [2024-06-10 12:18:04.873485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.698 [2024-06-10 12:18:04.873498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.698 qpair failed and we were unable to recover it. 00:29:15.698 [2024-06-10 12:18:04.873646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.698 [2024-06-10 12:18:04.873657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.698 qpair failed and we were unable to recover it. 00:29:15.698 [2024-06-10 12:18:04.873894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.698 [2024-06-10 12:18:04.873906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.698 qpair failed and we were unable to recover it. 00:29:15.698 [2024-06-10 12:18:04.873978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.698 [2024-06-10 12:18:04.873990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.698 qpair failed and we were unable to recover it. 
00:29:15.698 [2024-06-10 12:18:04.874069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.698 [2024-06-10 12:18:04.874081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.698 qpair failed and we were unable to recover it. 00:29:15.698 [2024-06-10 12:18:04.874241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.699 [2024-06-10 12:18:04.874252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.699 qpair failed and we were unable to recover it. 00:29:15.699 [2024-06-10 12:18:04.874419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.699 [2024-06-10 12:18:04.874431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.699 qpair failed and we were unable to recover it. 00:29:15.699 [2024-06-10 12:18:04.874643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.699 [2024-06-10 12:18:04.874655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.699 qpair failed and we were unable to recover it. 00:29:15.699 [2024-06-10 12:18:04.874760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.699 [2024-06-10 12:18:04.874772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.699 qpair failed and we were unable to recover it. 
00:29:15.699 [2024-06-10 12:18:04.875010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.699 [2024-06-10 12:18:04.875022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.699 qpair failed and we were unable to recover it. 00:29:15.699 [2024-06-10 12:18:04.875235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.699 [2024-06-10 12:18:04.875247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.699 qpair failed and we were unable to recover it. 00:29:15.699 [2024-06-10 12:18:04.875412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.699 [2024-06-10 12:18:04.875423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.699 qpair failed and we were unable to recover it. 00:29:15.699 [2024-06-10 12:18:04.875624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.699 [2024-06-10 12:18:04.875636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.699 qpair failed and we were unable to recover it. 00:29:15.699 [2024-06-10 12:18:04.875892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.699 [2024-06-10 12:18:04.875904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.699 qpair failed and we were unable to recover it. 
00:29:15.699 [2024-06-10 12:18:04.876118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.699 [2024-06-10 12:18:04.876129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.699 qpair failed and we were unable to recover it. 00:29:15.699 [2024-06-10 12:18:04.876292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.699 [2024-06-10 12:18:04.876305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.699 qpair failed and we were unable to recover it. 00:29:15.699 [2024-06-10 12:18:04.876404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.699 [2024-06-10 12:18:04.876418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.699 qpair failed and we were unable to recover it. 00:29:15.699 [2024-06-10 12:18:04.876507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.699 [2024-06-10 12:18:04.876519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.699 qpair failed and we were unable to recover it. 00:29:15.699 [2024-06-10 12:18:04.876601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.699 [2024-06-10 12:18:04.876613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.699 qpair failed and we were unable to recover it. 
00:29:15.699 [2024-06-10 12:18:04.876693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.699 [2024-06-10 12:18:04.876704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.699 qpair failed and we were unable to recover it. 00:29:15.699 [2024-06-10 12:18:04.876916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.699 [2024-06-10 12:18:04.876928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.699 qpair failed and we were unable to recover it. 00:29:15.699 [2024-06-10 12:18:04.877097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.699 [2024-06-10 12:18:04.877109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.699 qpair failed and we were unable to recover it. 00:29:15.699 [2024-06-10 12:18:04.877340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.699 [2024-06-10 12:18:04.877352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.699 qpair failed and we were unable to recover it. 00:29:15.699 [2024-06-10 12:18:04.877439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.699 [2024-06-10 12:18:04.877451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.699 qpair failed and we were unable to recover it. 
00:29:15.699 [2024-06-10 12:18:04.877623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.699 [2024-06-10 12:18:04.877636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.699 qpair failed and we were unable to recover it. 00:29:15.699 [2024-06-10 12:18:04.877736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.699 [2024-06-10 12:18:04.877748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.699 qpair failed and we were unable to recover it. 00:29:15.699 [2024-06-10 12:18:04.877953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.699 [2024-06-10 12:18:04.877965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.699 qpair failed and we were unable to recover it. 00:29:15.699 [2024-06-10 12:18:04.878059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.699 [2024-06-10 12:18:04.878071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.699 qpair failed and we were unable to recover it. 00:29:15.699 [2024-06-10 12:18:04.878216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.699 [2024-06-10 12:18:04.878228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.699 qpair failed and we were unable to recover it. 
00:29:15.699 [2024-06-10 12:18:04.878334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.699 [2024-06-10 12:18:04.878345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.699 qpair failed and we were unable to recover it. 00:29:15.699 [2024-06-10 12:18:04.878447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.699 [2024-06-10 12:18:04.878459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.699 qpair failed and we were unable to recover it. 00:29:15.699 [2024-06-10 12:18:04.878698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.699 [2024-06-10 12:18:04.878711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.699 qpair failed and we were unable to recover it. 00:29:15.699 [2024-06-10 12:18:04.878849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.700 [2024-06-10 12:18:04.878861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.700 qpair failed and we were unable to recover it. 00:29:15.700 [2024-06-10 12:18:04.878995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.700 [2024-06-10 12:18:04.879007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.700 qpair failed and we were unable to recover it. 
00:29:15.700 [2024-06-10 12:18:04.879268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.700 [2024-06-10 12:18:04.879280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.700 qpair failed and we were unable to recover it. 00:29:15.700 [2024-06-10 12:18:04.879461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.700 [2024-06-10 12:18:04.879473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.700 qpair failed and we were unable to recover it. 00:29:15.700 [2024-06-10 12:18:04.879668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.700 [2024-06-10 12:18:04.879681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.700 qpair failed and we were unable to recover it. 00:29:15.700 [2024-06-10 12:18:04.879893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.700 [2024-06-10 12:18:04.879905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.700 qpair failed and we were unable to recover it. 00:29:15.700 [2024-06-10 12:18:04.879985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.700 [2024-06-10 12:18:04.879997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.700 qpair failed and we were unable to recover it. 
00:29:15.700 [2024-06-10 12:18:04.880109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.700 [2024-06-10 12:18:04.880121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.700 qpair failed and we were unable to recover it. 00:29:15.700 [2024-06-10 12:18:04.880276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.700 [2024-06-10 12:18:04.880288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.700 qpair failed and we were unable to recover it. 00:29:15.700 [2024-06-10 12:18:04.880400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.700 [2024-06-10 12:18:04.880412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.700 qpair failed and we were unable to recover it. 00:29:15.700 [2024-06-10 12:18:04.880575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.700 [2024-06-10 12:18:04.880588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.700 qpair failed and we were unable to recover it. 00:29:15.700 [2024-06-10 12:18:04.880765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.700 [2024-06-10 12:18:04.880777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.700 qpair failed and we were unable to recover it. 
00:29:15.700 [2024-06-10 12:18:04.880840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.700 [2024-06-10 12:18:04.880852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.700 qpair failed and we were unable to recover it. 00:29:15.700 [2024-06-10 12:18:04.881067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.700 [2024-06-10 12:18:04.881079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.700 qpair failed and we were unable to recover it. 00:29:15.700 [2024-06-10 12:18:04.881269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.700 [2024-06-10 12:18:04.881281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.700 qpair failed and we were unable to recover it. 00:29:15.700 [2024-06-10 12:18:04.881556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.700 [2024-06-10 12:18:04.881568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.700 qpair failed and we were unable to recover it. 00:29:15.700 [2024-06-10 12:18:04.881751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.700 [2024-06-10 12:18:04.881763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.700 qpair failed and we were unable to recover it. 
00:29:15.703 [2024-06-10 12:18:04.900126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.703 [2024-06-10 12:18:04.900138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.703 qpair failed and we were unable to recover it. 00:29:15.703 [2024-06-10 12:18:04.900322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.703 [2024-06-10 12:18:04.900334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.703 qpair failed and we were unable to recover it. 00:29:15.703 [2024-06-10 12:18:04.900447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.703 [2024-06-10 12:18:04.900459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.703 qpair failed and we were unable to recover it. 00:29:15.703 [2024-06-10 12:18:04.900596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.703 [2024-06-10 12:18:04.900609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.703 qpair failed and we were unable to recover it. 00:29:15.703 [2024-06-10 12:18:04.900760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.703 [2024-06-10 12:18:04.900772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.703 qpair failed and we were unable to recover it. 
00:29:15.703 [2024-06-10 12:18:04.900931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.703 [2024-06-10 12:18:04.900943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.703 qpair failed and we were unable to recover it. 00:29:15.703 [2024-06-10 12:18:04.901141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.703 [2024-06-10 12:18:04.901154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.703 qpair failed and we were unable to recover it. 00:29:15.703 [2024-06-10 12:18:04.901301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.703 [2024-06-10 12:18:04.901312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.703 qpair failed and we were unable to recover it. 00:29:15.703 [2024-06-10 12:18:04.901464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.703 [2024-06-10 12:18:04.901480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.703 qpair failed and we were unable to recover it. 00:29:15.703 [2024-06-10 12:18:04.901594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.703 [2024-06-10 12:18:04.901606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.703 qpair failed and we were unable to recover it. 
00:29:15.703 [2024-06-10 12:18:04.901771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.703 [2024-06-10 12:18:04.901783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.703 qpair failed and we were unable to recover it. 00:29:15.703 [2024-06-10 12:18:04.901944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.703 [2024-06-10 12:18:04.901956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.703 qpair failed and we were unable to recover it. 00:29:15.703 [2024-06-10 12:18:04.902131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.703 [2024-06-10 12:18:04.902143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.703 qpair failed and we were unable to recover it. 00:29:15.703 [2024-06-10 12:18:04.902313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.703 [2024-06-10 12:18:04.902324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.703 qpair failed and we were unable to recover it. 00:29:15.703 [2024-06-10 12:18:04.902481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.703 [2024-06-10 12:18:04.902494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.703 qpair failed and we were unable to recover it. 
00:29:15.704 [2024-06-10 12:18:04.902589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.704 [2024-06-10 12:18:04.902601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.704 qpair failed and we were unable to recover it. 00:29:15.704 [2024-06-10 12:18:04.902698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.704 [2024-06-10 12:18:04.902710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.704 qpair failed and we were unable to recover it. 00:29:15.704 [2024-06-10 12:18:04.902884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.704 [2024-06-10 12:18:04.902897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.704 qpair failed and we were unable to recover it. 00:29:15.704 [2024-06-10 12:18:04.903082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.704 [2024-06-10 12:18:04.903094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.704 qpair failed and we were unable to recover it. 00:29:15.704 [2024-06-10 12:18:04.903204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.704 [2024-06-10 12:18:04.903216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.704 qpair failed and we were unable to recover it. 
00:29:15.704 [2024-06-10 12:18:04.903364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.704 [2024-06-10 12:18:04.903376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.704 qpair failed and we were unable to recover it. 00:29:15.704 [2024-06-10 12:18:04.903540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.704 [2024-06-10 12:18:04.903552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.704 qpair failed and we were unable to recover it. 00:29:15.704 [2024-06-10 12:18:04.903634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.704 [2024-06-10 12:18:04.903646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.704 qpair failed and we were unable to recover it. 00:29:15.704 [2024-06-10 12:18:04.903821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.704 [2024-06-10 12:18:04.903833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.704 qpair failed and we were unable to recover it. 00:29:15.704 [2024-06-10 12:18:04.903933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.704 [2024-06-10 12:18:04.903945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.704 qpair failed and we were unable to recover it. 
00:29:15.704 [2024-06-10 12:18:04.904094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.704 [2024-06-10 12:18:04.904106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.704 qpair failed and we were unable to recover it. 00:29:15.704 [2024-06-10 12:18:04.904273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.704 [2024-06-10 12:18:04.904286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.704 qpair failed and we were unable to recover it. 00:29:15.704 [2024-06-10 12:18:04.904386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.704 [2024-06-10 12:18:04.904398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.704 qpair failed and we were unable to recover it. 00:29:15.704 [2024-06-10 12:18:04.904552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.704 [2024-06-10 12:18:04.904564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.704 qpair failed and we were unable to recover it. 00:29:15.704 [2024-06-10 12:18:04.904775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.704 [2024-06-10 12:18:04.904788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.704 qpair failed and we were unable to recover it. 
00:29:15.704 [2024-06-10 12:18:04.905044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.704 [2024-06-10 12:18:04.905057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.704 qpair failed and we were unable to recover it. 00:29:15.704 [2024-06-10 12:18:04.905144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.704 [2024-06-10 12:18:04.905156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.704 qpair failed and we were unable to recover it. 00:29:15.704 [2024-06-10 12:18:04.905238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.704 [2024-06-10 12:18:04.905251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.704 qpair failed and we were unable to recover it. 00:29:15.704 [2024-06-10 12:18:04.905467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.704 [2024-06-10 12:18:04.905484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.704 qpair failed and we were unable to recover it. 00:29:15.704 [2024-06-10 12:18:04.905579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.704 [2024-06-10 12:18:04.905591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.704 qpair failed and we were unable to recover it. 
00:29:15.704 [2024-06-10 12:18:04.905757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.704 [2024-06-10 12:18:04.905769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.704 qpair failed and we were unable to recover it. 00:29:15.704 [2024-06-10 12:18:04.905931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.704 [2024-06-10 12:18:04.905943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.704 qpair failed and we were unable to recover it. 00:29:15.704 [2024-06-10 12:18:04.906104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.704 [2024-06-10 12:18:04.906116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.704 qpair failed and we were unable to recover it. 00:29:15.704 [2024-06-10 12:18:04.906281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.704 [2024-06-10 12:18:04.906292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.704 qpair failed and we were unable to recover it. 00:29:15.704 [2024-06-10 12:18:04.906508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.704 [2024-06-10 12:18:04.906521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.704 qpair failed and we were unable to recover it. 
00:29:15.704 [2024-06-10 12:18:04.906669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.704 [2024-06-10 12:18:04.906681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.704 qpair failed and we were unable to recover it. 00:29:15.704 [2024-06-10 12:18:04.906835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.705 [2024-06-10 12:18:04.906847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.705 qpair failed and we were unable to recover it. 00:29:15.705 [2024-06-10 12:18:04.906994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.705 [2024-06-10 12:18:04.907006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.705 qpair failed and we were unable to recover it. 00:29:15.705 [2024-06-10 12:18:04.907092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.705 [2024-06-10 12:18:04.907104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.705 qpair failed and we were unable to recover it. 00:29:15.705 [2024-06-10 12:18:04.907251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.705 [2024-06-10 12:18:04.907287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.705 qpair failed and we were unable to recover it. 
00:29:15.705 [2024-06-10 12:18:04.907540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.705 [2024-06-10 12:18:04.907559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.705 qpair failed and we were unable to recover it. 00:29:15.705 [2024-06-10 12:18:04.907664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.705 [2024-06-10 12:18:04.907681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.705 qpair failed and we were unable to recover it. 00:29:15.705 [2024-06-10 12:18:04.907784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.705 [2024-06-10 12:18:04.907801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.705 qpair failed and we were unable to recover it. 00:29:15.705 [2024-06-10 12:18:04.907901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.705 [2024-06-10 12:18:04.907918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.705 qpair failed and we were unable to recover it. 00:29:15.705 [2024-06-10 12:18:04.908099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.705 [2024-06-10 12:18:04.908115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.705 qpair failed and we were unable to recover it. 
00:29:15.705 [2024-06-10 12:18:04.908283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.705 [2024-06-10 12:18:04.908296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.705 qpair failed and we were unable to recover it. 00:29:15.705 [2024-06-10 12:18:04.908473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.705 [2024-06-10 12:18:04.908498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.705 qpair failed and we were unable to recover it. 00:29:15.705 [2024-06-10 12:18:04.908591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.705 [2024-06-10 12:18:04.908603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.705 qpair failed and we were unable to recover it. 00:29:15.705 [2024-06-10 12:18:04.908760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.705 [2024-06-10 12:18:04.908772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.705 qpair failed and we were unable to recover it. 00:29:15.705 [2024-06-10 12:18:04.908949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.705 [2024-06-10 12:18:04.908961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.705 qpair failed and we were unable to recover it. 
00:29:15.705 [2024-06-10 12:18:04.909051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.705 [2024-06-10 12:18:04.909063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.705 qpair failed and we were unable to recover it. 00:29:15.705 [2024-06-10 12:18:04.909289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.705 [2024-06-10 12:18:04.909301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.705 qpair failed and we were unable to recover it. 00:29:15.705 [2024-06-10 12:18:04.909411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.705 [2024-06-10 12:18:04.909426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.705 qpair failed and we were unable to recover it. 00:29:15.705 [2024-06-10 12:18:04.909573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.705 [2024-06-10 12:18:04.909586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.705 qpair failed and we were unable to recover it. 00:29:15.705 [2024-06-10 12:18:04.909736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.705 [2024-06-10 12:18:04.909748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.705 qpair failed and we were unable to recover it. 
00:29:15.705 [2024-06-10 12:18:04.909902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.705 [2024-06-10 12:18:04.909915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.705 qpair failed and we were unable to recover it. 00:29:15.705 [2024-06-10 12:18:04.910003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.705 [2024-06-10 12:18:04.910015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.705 qpair failed and we were unable to recover it. 00:29:15.705 [2024-06-10 12:18:04.910178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.705 [2024-06-10 12:18:04.910190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.705 qpair failed and we were unable to recover it. 00:29:15.705 [2024-06-10 12:18:04.910272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.705 [2024-06-10 12:18:04.910284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.705 qpair failed and we were unable to recover it. 00:29:15.705 [2024-06-10 12:18:04.910450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.705 [2024-06-10 12:18:04.910462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.705 qpair failed and we were unable to recover it. 
00:29:15.705 [2024-06-10 12:18:04.910547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.705 [2024-06-10 12:18:04.910559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.705 qpair failed and we were unable to recover it. 00:29:15.705 [2024-06-10 12:18:04.910727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.705 [2024-06-10 12:18:04.910739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.705 qpair failed and we were unable to recover it. 00:29:15.705 [2024-06-10 12:18:04.910995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.705 [2024-06-10 12:18:04.911007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.705 qpair failed and we were unable to recover it. 00:29:15.705 [2024-06-10 12:18:04.911209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.705 [2024-06-10 12:18:04.911221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.705 qpair failed and we were unable to recover it. 00:29:15.705 [2024-06-10 12:18:04.911387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.705 [2024-06-10 12:18:04.911399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.705 qpair failed and we were unable to recover it. 
00:29:15.705 [2024-06-10 12:18:04.911484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.705 [2024-06-10 12:18:04.911497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.705 qpair failed and we were unable to recover it. 00:29:15.705 [2024-06-10 12:18:04.911760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.705 [2024-06-10 12:18:04.911772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.705 qpair failed and we were unable to recover it. 00:29:15.705 [2024-06-10 12:18:04.911945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.705 [2024-06-10 12:18:04.911958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.705 qpair failed and we were unable to recover it. 00:29:15.706 [2024-06-10 12:18:04.912066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.706 [2024-06-10 12:18:04.912078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.706 qpair failed and we were unable to recover it. 00:29:15.706 [2024-06-10 12:18:04.912148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.706 [2024-06-10 12:18:04.912160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.706 qpair failed and we were unable to recover it. 
00:29:15.706 [2024-06-10 12:18:04.912308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.706 [2024-06-10 12:18:04.912320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.706 qpair failed and we were unable to recover it.
[the same connect() failed (errno = 111) / "qpair failed and we were unable to recover it." message pair repeats continuously from 12:18:04.912417 through 12:18:04.929916, targeting addr=10.0.0.2, port=4420; the failing qpair is tqpair=0x7f5044000b90 throughout, except between 12:18:04.925837 and 12:18:04.927329 where it is tqpair=0x7f503c000b90]
00:29:15.709 [2024-06-10 12:18:04.930007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.709 [2024-06-10 12:18:04.930019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.709 qpair failed and we were unable to recover it. 00:29:15.709 [2024-06-10 12:18:04.930184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.709 [2024-06-10 12:18:04.930196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.709 qpair failed and we were unable to recover it. 00:29:15.709 [2024-06-10 12:18:04.930285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.709 [2024-06-10 12:18:04.930297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.709 qpair failed and we were unable to recover it. 00:29:15.709 [2024-06-10 12:18:04.930483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.709 [2024-06-10 12:18:04.930502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.709 qpair failed and we were unable to recover it. 00:29:15.709 [2024-06-10 12:18:04.930663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.709 [2024-06-10 12:18:04.930675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.709 qpair failed and we were unable to recover it. 
00:29:15.709 [2024-06-10 12:18:04.930859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.709 [2024-06-10 12:18:04.930871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.709 qpair failed and we were unable to recover it. 00:29:15.709 [2024-06-10 12:18:04.931029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.709 [2024-06-10 12:18:04.931041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.709 qpair failed and we were unable to recover it. 00:29:15.709 [2024-06-10 12:18:04.931189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.709 [2024-06-10 12:18:04.931201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.709 qpair failed and we were unable to recover it. 00:29:15.709 [2024-06-10 12:18:04.931373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.709 [2024-06-10 12:18:04.931385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.709 qpair failed and we were unable to recover it. 00:29:15.709 [2024-06-10 12:18:04.931569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.709 [2024-06-10 12:18:04.931581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.709 qpair failed and we were unable to recover it. 
00:29:15.709 [2024-06-10 12:18:04.931730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.709 [2024-06-10 12:18:04.931742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.709 qpair failed and we were unable to recover it. 00:29:15.709 [2024-06-10 12:18:04.931908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.709 [2024-06-10 12:18:04.931920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.709 qpair failed and we were unable to recover it. 00:29:15.709 [2024-06-10 12:18:04.932080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.709 [2024-06-10 12:18:04.932092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.709 qpair failed and we were unable to recover it. 00:29:15.709 [2024-06-10 12:18:04.932250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.709 [2024-06-10 12:18:04.932262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.709 qpair failed and we were unable to recover it. 00:29:15.709 [2024-06-10 12:18:04.932412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.709 [2024-06-10 12:18:04.932426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.709 qpair failed and we were unable to recover it. 
00:29:15.709 [2024-06-10 12:18:04.932591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.709 [2024-06-10 12:18:04.932604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.709 qpair failed and we were unable to recover it. 00:29:15.709 [2024-06-10 12:18:04.932798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.709 [2024-06-10 12:18:04.932810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.709 qpair failed and we were unable to recover it. 00:29:15.709 [2024-06-10 12:18:04.932971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.709 [2024-06-10 12:18:04.932984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.709 qpair failed and we were unable to recover it. 00:29:15.709 [2024-06-10 12:18:04.933198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.709 [2024-06-10 12:18:04.933211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.709 qpair failed and we were unable to recover it. 00:29:15.709 [2024-06-10 12:18:04.933444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.709 [2024-06-10 12:18:04.933456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.709 qpair failed and we were unable to recover it. 
00:29:15.709 [2024-06-10 12:18:04.933629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.709 [2024-06-10 12:18:04.933641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.709 qpair failed and we were unable to recover it. 00:29:15.709 [2024-06-10 12:18:04.933791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.709 [2024-06-10 12:18:04.933804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.709 qpair failed and we were unable to recover it. 00:29:15.709 [2024-06-10 12:18:04.933889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.709 [2024-06-10 12:18:04.933901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.709 qpair failed and we were unable to recover it. 00:29:15.709 [2024-06-10 12:18:04.934051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.709 [2024-06-10 12:18:04.934063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.709 qpair failed and we were unable to recover it. 00:29:15.709 [2024-06-10 12:18:04.934277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 12:18:04.934289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.710 qpair failed and we were unable to recover it. 
00:29:15.710 [2024-06-10 12:18:04.934483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 12:18:04.934495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.710 qpair failed and we were unable to recover it. 00:29:15.710 [2024-06-10 12:18:04.934583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 12:18:04.934596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.710 qpair failed and we were unable to recover it. 00:29:15.710 [2024-06-10 12:18:04.934706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 12:18:04.934718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.710 qpair failed and we were unable to recover it. 00:29:15.710 [2024-06-10 12:18:04.934950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 12:18:04.934962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.710 qpair failed and we were unable to recover it. 00:29:15.710 [2024-06-10 12:18:04.935126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 12:18:04.935138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.710 qpair failed and we were unable to recover it. 
00:29:15.710 [2024-06-10 12:18:04.935218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 12:18:04.935230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.710 qpair failed and we were unable to recover it. 00:29:15.710 [2024-06-10 12:18:04.935307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 12:18:04.935319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.710 qpair failed and we were unable to recover it. 00:29:15.710 [2024-06-10 12:18:04.935415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 12:18:04.935427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.710 qpair failed and we were unable to recover it. 00:29:15.710 [2024-06-10 12:18:04.935646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 12:18:04.935659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.710 qpair failed and we were unable to recover it. 00:29:15.710 [2024-06-10 12:18:04.935757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 12:18:04.935769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.710 qpair failed and we were unable to recover it. 
00:29:15.710 [2024-06-10 12:18:04.935835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 12:18:04.935847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.710 qpair failed and we were unable to recover it. 00:29:15.710 [2024-06-10 12:18:04.935954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 12:18:04.935966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.710 qpair failed and we were unable to recover it. 00:29:15.710 [2024-06-10 12:18:04.936133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 12:18:04.936145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.710 qpair failed and we were unable to recover it. 00:29:15.710 [2024-06-10 12:18:04.936235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 12:18:04.936247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.710 qpair failed and we were unable to recover it. 00:29:15.710 [2024-06-10 12:18:04.936353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 12:18:04.936365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.710 qpair failed and we were unable to recover it. 
00:29:15.710 [2024-06-10 12:18:04.936638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 12:18:04.936651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.710 qpair failed and we were unable to recover it. 00:29:15.710 [2024-06-10 12:18:04.936725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 12:18:04.936737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.710 qpair failed and we were unable to recover it. 00:29:15.710 [2024-06-10 12:18:04.936897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 12:18:04.936909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.710 qpair failed and we were unable to recover it. 00:29:15.710 [2024-06-10 12:18:04.937103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 12:18:04.937115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.710 qpair failed and we were unable to recover it. 00:29:15.710 [2024-06-10 12:18:04.937218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 12:18:04.937231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.710 qpair failed and we were unable to recover it. 
00:29:15.710 [2024-06-10 12:18:04.937384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 12:18:04.937396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.710 qpair failed and we were unable to recover it. 00:29:15.710 [2024-06-10 12:18:04.937615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 12:18:04.937627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.710 qpair failed and we were unable to recover it. 00:29:15.710 [2024-06-10 12:18:04.937736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 12:18:04.937748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.710 qpair failed and we were unable to recover it. 00:29:15.710 [2024-06-10 12:18:04.937881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 12:18:04.937893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.710 qpair failed and we were unable to recover it. 00:29:15.710 [2024-06-10 12:18:04.938061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 12:18:04.938073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.710 qpair failed and we were unable to recover it. 
00:29:15.710 [2024-06-10 12:18:04.938259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 12:18:04.938272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.710 qpair failed and we were unable to recover it. 00:29:15.710 [2024-06-10 12:18:04.938528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 12:18:04.938540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.710 qpair failed and we were unable to recover it. 00:29:15.710 [2024-06-10 12:18:04.938765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 12:18:04.938777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.710 qpair failed and we were unable to recover it. 00:29:15.710 [2024-06-10 12:18:04.938868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 12:18:04.938881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.710 qpair failed and we were unable to recover it. 00:29:15.710 [2024-06-10 12:18:04.939042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 12:18:04.939056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.710 qpair failed and we were unable to recover it. 
00:29:15.710 [2024-06-10 12:18:04.939145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 12:18:04.939157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.710 qpair failed and we were unable to recover it. 00:29:15.710 [2024-06-10 12:18:04.939339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 12:18:04.939352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.710 qpair failed and we were unable to recover it. 00:29:15.710 [2024-06-10 12:18:04.939599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 12:18:04.939640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.710 qpair failed and we were unable to recover it. 00:29:15.710 [2024-06-10 12:18:04.939860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.711 [2024-06-10 12:18:04.939900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.711 qpair failed and we were unable to recover it. 00:29:15.711 [2024-06-10 12:18:04.940209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.711 [2024-06-10 12:18:04.940250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.711 qpair failed and we were unable to recover it. 
00:29:15.711 [2024-06-10 12:18:04.940383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.711 [2024-06-10 12:18:04.940395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.711 qpair failed and we were unable to recover it. 00:29:15.711 [2024-06-10 12:18:04.940566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.711 [2024-06-10 12:18:04.940578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.711 qpair failed and we were unable to recover it. 00:29:15.711 [2024-06-10 12:18:04.940727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.711 [2024-06-10 12:18:04.940739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.711 qpair failed and we were unable to recover it. 00:29:15.711 [2024-06-10 12:18:04.940994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.711 [2024-06-10 12:18:04.941034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.711 qpair failed and we were unable to recover it. 00:29:15.711 [2024-06-10 12:18:04.941254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.711 [2024-06-10 12:18:04.941266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.711 qpair failed and we were unable to recover it. 
00:29:15.711 [2024-06-10 12:18:04.941495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.711 [2024-06-10 12:18:04.941537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.711 qpair failed and we were unable to recover it. 00:29:15.711 [2024-06-10 12:18:04.941833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.711 [2024-06-10 12:18:04.941872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.711 qpair failed and we were unable to recover it. 00:29:15.711 [2024-06-10 12:18:04.942094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.711 [2024-06-10 12:18:04.942133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.711 qpair failed and we were unable to recover it. 00:29:15.711 [2024-06-10 12:18:04.942422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.711 [2024-06-10 12:18:04.942435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.711 qpair failed and we were unable to recover it. 00:29:15.711 [2024-06-10 12:18:04.942596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.711 [2024-06-10 12:18:04.942608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.711 qpair failed and we were unable to recover it. 
00:29:15.711 [2024-06-10 12:18:04.942853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.711 [2024-06-10 12:18:04.942893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.711 qpair failed and we were unable to recover it. 00:29:15.711 [2024-06-10 12:18:04.943057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.711 [2024-06-10 12:18:04.943097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.711 qpair failed and we were unable to recover it. 00:29:15.711 [2024-06-10 12:18:04.943374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.711 [2024-06-10 12:18:04.943413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.711 qpair failed and we were unable to recover it. 00:29:15.711 [2024-06-10 12:18:04.943750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.711 [2024-06-10 12:18:04.943792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.711 qpair failed and we were unable to recover it. 00:29:15.711 [2024-06-10 12:18:04.944085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.711 [2024-06-10 12:18:04.944126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.711 qpair failed and we were unable to recover it. 
00:29:15.711 [2024-06-10 12:18:04.944427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.711 [2024-06-10 12:18:04.944466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.711 qpair failed and we were unable to recover it. 00:29:15.711 [2024-06-10 12:18:04.944668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.711 [2024-06-10 12:18:04.944708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.711 qpair failed and we were unable to recover it. 00:29:15.711 [2024-06-10 12:18:04.944932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.711 [2024-06-10 12:18:04.944972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.711 qpair failed and we were unable to recover it. 00:29:15.711 [2024-06-10 12:18:04.945246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.711 [2024-06-10 12:18:04.945286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.711 qpair failed and we were unable to recover it. 00:29:15.711 [2024-06-10 12:18:04.945583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.711 [2024-06-10 12:18:04.945624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.711 qpair failed and we were unable to recover it. 
[... the same three-entry failure (posix.c:1037:posix_sock_create connect() errno = 111, nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock error for tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats verbatim, with only the timestamps advancing, from 12:18:04.945928 through 12:18:04.972610 ...]
00:29:15.714 [2024-06-10 12:18:04.972767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.714 [2024-06-10 12:18:04.972780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.714 qpair failed and we were unable to recover it. 00:29:15.714 [2024-06-10 12:18:04.972973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.715 [2024-06-10 12:18:04.972985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.715 qpair failed and we were unable to recover it. 00:29:15.715 [2024-06-10 12:18:04.973156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.715 [2024-06-10 12:18:04.973195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.715 qpair failed and we were unable to recover it. 00:29:15.715 [2024-06-10 12:18:04.973414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.715 [2024-06-10 12:18:04.973453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.715 qpair failed and we were unable to recover it. 00:29:15.715 [2024-06-10 12:18:04.973718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.715 [2024-06-10 12:18:04.973730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.715 qpair failed and we were unable to recover it. 
00:29:15.715 [2024-06-10 12:18:04.973977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.715 [2024-06-10 12:18:04.973989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.715 qpair failed and we were unable to recover it. 00:29:15.715 [2024-06-10 12:18:04.974134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.715 [2024-06-10 12:18:04.974145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.715 qpair failed and we were unable to recover it. 00:29:15.715 [2024-06-10 12:18:04.974381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.715 [2024-06-10 12:18:04.974393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.715 qpair failed and we were unable to recover it. 00:29:15.715 [2024-06-10 12:18:04.974558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.715 [2024-06-10 12:18:04.974570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.715 qpair failed and we were unable to recover it. 00:29:15.715 [2024-06-10 12:18:04.974746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.715 [2024-06-10 12:18:04.974785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.715 qpair failed and we were unable to recover it. 
00:29:15.715 [2024-06-10 12:18:04.975076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.715 [2024-06-10 12:18:04.975116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.715 qpair failed and we were unable to recover it. 00:29:15.715 [2024-06-10 12:18:04.975333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.715 [2024-06-10 12:18:04.975373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.715 qpair failed and we were unable to recover it. 00:29:15.715 [2024-06-10 12:18:04.975593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.715 [2024-06-10 12:18:04.975633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.715 qpair failed and we were unable to recover it. 00:29:15.715 [2024-06-10 12:18:04.975907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.715 [2024-06-10 12:18:04.975919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.715 qpair failed and we were unable to recover it. 00:29:15.715 [2024-06-10 12:18:04.976008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.715 [2024-06-10 12:18:04.976019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.715 qpair failed and we were unable to recover it. 
00:29:15.715 [2024-06-10 12:18:04.976122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.715 [2024-06-10 12:18:04.976133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.715 qpair failed and we were unable to recover it. 00:29:15.715 [2024-06-10 12:18:04.976358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.715 [2024-06-10 12:18:04.976399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.715 qpair failed and we were unable to recover it. 00:29:15.715 [2024-06-10 12:18:04.976627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.715 [2024-06-10 12:18:04.976669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.715 qpair failed and we were unable to recover it. 00:29:15.715 [2024-06-10 12:18:04.976877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.715 [2024-06-10 12:18:04.976918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.715 qpair failed and we were unable to recover it. 00:29:15.715 [2024-06-10 12:18:04.977146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.715 [2024-06-10 12:18:04.977186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.715 qpair failed and we were unable to recover it. 
00:29:15.715 [2024-06-10 12:18:04.977408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.715 [2024-06-10 12:18:04.977447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.715 qpair failed and we were unable to recover it. 00:29:15.715 [2024-06-10 12:18:04.977700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.715 [2024-06-10 12:18:04.977741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.715 qpair failed and we were unable to recover it. 00:29:15.715 [2024-06-10 12:18:04.978017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.715 [2024-06-10 12:18:04.978058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.715 qpair failed and we were unable to recover it. 00:29:15.715 [2024-06-10 12:18:04.978403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.715 [2024-06-10 12:18:04.978443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.715 qpair failed and we were unable to recover it. 00:29:15.715 [2024-06-10 12:18:04.978734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.715 [2024-06-10 12:18:04.978779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.715 qpair failed and we were unable to recover it. 
00:29:15.715 [2024-06-10 12:18:04.978994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.715 [2024-06-10 12:18:04.979035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.715 qpair failed and we were unable to recover it. 00:29:15.715 [2024-06-10 12:18:04.979280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.715 [2024-06-10 12:18:04.979319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.715 qpair failed and we were unable to recover it. 00:29:15.715 [2024-06-10 12:18:04.979556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.715 [2024-06-10 12:18:04.979597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.715 qpair failed and we were unable to recover it. 00:29:15.715 [2024-06-10 12:18:04.979843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.715 [2024-06-10 12:18:04.979884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.715 qpair failed and we were unable to recover it. 00:29:15.715 [2024-06-10 12:18:04.980185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.715 [2024-06-10 12:18:04.980224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.715 qpair failed and we were unable to recover it. 
00:29:15.715 [2024-06-10 12:18:04.980397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.715 [2024-06-10 12:18:04.980436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.715 qpair failed and we were unable to recover it. 00:29:15.715 [2024-06-10 12:18:04.980562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.715 [2024-06-10 12:18:04.980574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.715 qpair failed and we were unable to recover it. 00:29:15.715 [2024-06-10 12:18:04.980800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.715 [2024-06-10 12:18:04.980840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.715 qpair failed and we were unable to recover it. 00:29:15.715 [2024-06-10 12:18:04.981004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.715 [2024-06-10 12:18:04.981044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.715 qpair failed and we were unable to recover it. 00:29:15.715 [2024-06-10 12:18:04.981366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.715 [2024-06-10 12:18:04.981407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.715 qpair failed and we were unable to recover it. 
00:29:15.715 [2024-06-10 12:18:04.981630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-06-10 12:18:04.981670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.716 qpair failed and we were unable to recover it. 00:29:15.716 [2024-06-10 12:18:04.981954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-06-10 12:18:04.981994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.716 qpair failed and we were unable to recover it. 00:29:15.716 [2024-06-10 12:18:04.982224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-06-10 12:18:04.982264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.716 qpair failed and we were unable to recover it. 00:29:15.716 [2024-06-10 12:18:04.982568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-06-10 12:18:04.982609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.716 qpair failed and we were unable to recover it. 00:29:15.716 [2024-06-10 12:18:04.982885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-06-10 12:18:04.982925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.716 qpair failed and we were unable to recover it. 
00:29:15.716 [2024-06-10 12:18:04.983140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-06-10 12:18:04.983181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.716 qpair failed and we were unable to recover it. 00:29:15.716 [2024-06-10 12:18:04.983451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-06-10 12:18:04.983463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.716 qpair failed and we were unable to recover it. 00:29:15.716 [2024-06-10 12:18:04.983656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-06-10 12:18:04.983668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.716 qpair failed and we were unable to recover it. 00:29:15.716 [2024-06-10 12:18:04.983846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-06-10 12:18:04.983858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.716 qpair failed and we were unable to recover it. 00:29:15.716 [2024-06-10 12:18:04.984018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-06-10 12:18:04.984029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.716 qpair failed and we were unable to recover it. 
00:29:15.716 [2024-06-10 12:18:04.984196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-06-10 12:18:04.984207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.716 qpair failed and we were unable to recover it. 00:29:15.716 [2024-06-10 12:18:04.984293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-06-10 12:18:04.984322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.716 qpair failed and we were unable to recover it. 00:29:15.716 [2024-06-10 12:18:04.984597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-06-10 12:18:04.984639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.716 qpair failed and we were unable to recover it. 00:29:15.716 [2024-06-10 12:18:04.984853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-06-10 12:18:04.984893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.716 qpair failed and we were unable to recover it. 00:29:15.716 [2024-06-10 12:18:04.985110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-06-10 12:18:04.985150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.716 qpair failed and we were unable to recover it. 
00:29:15.716 [2024-06-10 12:18:04.985305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-06-10 12:18:04.985316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.716 qpair failed and we were unable to recover it. 00:29:15.716 [2024-06-10 12:18:04.985571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-06-10 12:18:04.985583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.716 qpair failed and we were unable to recover it. 00:29:15.716 [2024-06-10 12:18:04.985683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-06-10 12:18:04.985695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.716 qpair failed and we were unable to recover it. 00:29:15.716 [2024-06-10 12:18:04.985785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-06-10 12:18:04.985796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.716 qpair failed and we were unable to recover it. 00:29:15.716 [2024-06-10 12:18:04.985952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-06-10 12:18:04.985991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.716 qpair failed and we were unable to recover it. 
00:29:15.716 [2024-06-10 12:18:04.986239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-06-10 12:18:04.986280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.716 qpair failed and we were unable to recover it. 00:29:15.716 [2024-06-10 12:18:04.986497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-06-10 12:18:04.986539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.716 qpair failed and we were unable to recover it. 00:29:15.716 [2024-06-10 12:18:04.986762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-06-10 12:18:04.986802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.716 qpair failed and we were unable to recover it. 00:29:15.716 [2024-06-10 12:18:04.987105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-06-10 12:18:04.987145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.716 qpair failed and we were unable to recover it. 00:29:15.716 [2024-06-10 12:18:04.987349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-06-10 12:18:04.987389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.716 qpair failed and we were unable to recover it. 
00:29:15.716 [2024-06-10 12:18:04.987631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-06-10 12:18:04.987672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.716 qpair failed and we were unable to recover it. 00:29:15.716 [2024-06-10 12:18:04.987977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-06-10 12:18:04.988018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.716 qpair failed and we were unable to recover it. 00:29:15.716 [2024-06-10 12:18:04.988161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-06-10 12:18:04.988200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.716 qpair failed and we were unable to recover it. 00:29:15.716 [2024-06-10 12:18:04.988502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-06-10 12:18:04.988543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.716 qpair failed and we were unable to recover it. 00:29:15.716 [2024-06-10 12:18:04.988764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-06-10 12:18:04.988778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.716 qpair failed and we were unable to recover it. 
00:29:15.716 [2024-06-10 12:18:04.988894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-06-10 12:18:04.988905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.716 qpair failed and we were unable to recover it. 00:29:15.716 [2024-06-10 12:18:04.989140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-06-10 12:18:04.989152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.716 qpair failed and we were unable to recover it. 00:29:15.716 [2024-06-10 12:18:04.989257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-06-10 12:18:04.989269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.716 qpair failed and we were unable to recover it. 00:29:15.716 [2024-06-10 12:18:04.989437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.717 [2024-06-10 12:18:04.989449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.717 qpair failed and we were unable to recover it. 00:29:15.717 [2024-06-10 12:18:04.989558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.717 [2024-06-10 12:18:04.989570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.717 qpair failed and we were unable to recover it. 
00:29:15.717 [2024-06-10 12:18:04.989716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.717 [2024-06-10 12:18:04.989768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.717 qpair failed and we were unable to recover it. 00:29:15.717 [2024-06-10 12:18:04.989945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.717 [2024-06-10 12:18:04.989984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.717 qpair failed and we were unable to recover it. 00:29:15.717 [2024-06-10 12:18:04.990204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.717 [2024-06-10 12:18:04.990243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.717 qpair failed and we were unable to recover it. 00:29:15.717 [2024-06-10 12:18:04.990518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.717 [2024-06-10 12:18:04.990530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.717 qpair failed and we were unable to recover it. 00:29:15.717 [2024-06-10 12:18:04.990698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.717 [2024-06-10 12:18:04.990710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.717 qpair failed and we were unable to recover it. 
00:29:15.717 [2024-06-10 12:18:04.990795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.717 [2024-06-10 12:18:04.990807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.717 qpair failed and we were unable to recover it. 00:29:15.717 [2024-06-10 12:18:04.990920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.717 [2024-06-10 12:18:04.990932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.717 qpair failed and we were unable to recover it. 00:29:15.717 [2024-06-10 12:18:04.991080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.717 [2024-06-10 12:18:04.991091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.717 qpair failed and we were unable to recover it. 00:29:15.717 [2024-06-10 12:18:04.991340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.717 [2024-06-10 12:18:04.991381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.717 qpair failed and we were unable to recover it. 00:29:15.717 [2024-06-10 12:18:04.991703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.717 [2024-06-10 12:18:04.991744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.717 qpair failed and we were unable to recover it. 
00:29:15.717 [2024-06-10 12:18:04.991963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.717 [2024-06-10 12:18:04.992003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.717 qpair failed and we were unable to recover it. 00:29:15.717 [2024-06-10 12:18:04.992213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.717 [2024-06-10 12:18:04.992259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.717 qpair failed and we were unable to recover it. 00:29:15.717 [2024-06-10 12:18:04.992424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.717 [2024-06-10 12:18:04.992435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.717 qpair failed and we were unable to recover it. 00:29:15.717 [2024-06-10 12:18:04.992600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.717 [2024-06-10 12:18:04.992642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.717 qpair failed and we were unable to recover it. 00:29:15.717 [2024-06-10 12:18:04.992939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.717 [2024-06-10 12:18:04.992979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.717 qpair failed and we were unable to recover it. 
00:29:15.717 [2024-06-10 12:18:04.993214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.717 [2024-06-10 12:18:04.993254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.717 qpair failed and we were unable to recover it.
00:29:15.717 [2024-06-10 12:18:04.993423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.717 [2024-06-10 12:18:04.993463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.717 qpair failed and we were unable to recover it.
00:29:15.717 [2024-06-10 12:18:04.993682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.717 [2024-06-10 12:18:04.993722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.717 qpair failed and we were unable to recover it.
00:29:15.717 [2024-06-10 12:18:04.993940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.717 [2024-06-10 12:18:04.993981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.717 qpair failed and we were unable to recover it.
00:29:15.717 [2024-06-10 12:18:04.994305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.717 [2024-06-10 12:18:04.994344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.717 qpair failed and we were unable to recover it.
00:29:15.717 [2024-06-10 12:18:04.994565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.717 [2024-06-10 12:18:04.994605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.717 qpair failed and we were unable to recover it.
00:29:15.717 [2024-06-10 12:18:04.994805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.717 [2024-06-10 12:18:04.994817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.717 qpair failed and we were unable to recover it.
00:29:15.717 [2024-06-10 12:18:04.994922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.717 [2024-06-10 12:18:04.994934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.717 qpair failed and we were unable to recover it.
00:29:15.717 [2024-06-10 12:18:04.995043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.717 [2024-06-10 12:18:04.995055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.717 qpair failed and we were unable to recover it.
00:29:15.717 [2024-06-10 12:18:04.995270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.717 [2024-06-10 12:18:04.995282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.717 qpair failed and we were unable to recover it.
00:29:15.717 [2024-06-10 12:18:04.995452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.717 [2024-06-10 12:18:04.995463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.717 qpair failed and we were unable to recover it.
00:29:15.717 [2024-06-10 12:18:04.995563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.717 [2024-06-10 12:18:04.995575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.717 qpair failed and we were unable to recover it.
00:29:15.717 [2024-06-10 12:18:04.995742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.717 [2024-06-10 12:18:04.995790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.717 qpair failed and we were unable to recover it.
00:29:15.717 [2024-06-10 12:18:04.996011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.717 [2024-06-10 12:18:04.996051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.717 qpair failed and we were unable to recover it.
00:29:15.717 [2024-06-10 12:18:04.996192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.717 [2024-06-10 12:18:04.996231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.717 qpair failed and we were unable to recover it.
00:29:15.717 [2024-06-10 12:18:04.996369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.717 [2024-06-10 12:18:04.996382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.717 qpair failed and we were unable to recover it.
00:29:15.717 [2024-06-10 12:18:04.996541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.717 [2024-06-10 12:18:04.996553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.717 qpair failed and we were unable to recover it.
00:29:15.717 [2024-06-10 12:18:04.996648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.717 [2024-06-10 12:18:04.996659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.717 qpair failed and we were unable to recover it.
00:29:15.717 [2024-06-10 12:18:04.996832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.717 [2024-06-10 12:18:04.996845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.717 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:04.996990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:04.997003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:04.997096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:04.997108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:04.997225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:04.997237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:04.997458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:04.997510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:04.997668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:04.997707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:04.998002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:04.998014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:04.998176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:04.998187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:04.998280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:04.998291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:04.998448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:04.998500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:04.998643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:04.998683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:04.998882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:04.998921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:04.999159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:04.999200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:04.999410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:04.999449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:04.999645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:04.999657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:04.999880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:04.999921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:05.000146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:05.000186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:05.000432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:05.000472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:05.000754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:05.000795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:05.000973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:05.001013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:05.001285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:05.001325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:05.001510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:05.001523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:05.001726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:05.001738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:05.001897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:05.001908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:05.002081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:05.002121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:05.002379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:05.002418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:05.002602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:05.002642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:05.002767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:05.002779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:05.002950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:05.002962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:05.003113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:05.003124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:05.003206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:05.003218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:05.003365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:05.003377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:05.003533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:05.003573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:05.003790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:05.003831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:05.003979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:05.004019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:05.004269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:05.004309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:05.004609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:05.004651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.718 qpair failed and we were unable to recover it.
00:29:15.718 [2024-06-10 12:18:05.004902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-06-10 12:18:05.004914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.719 qpair failed and we were unable to recover it.
00:29:15.719 [2024-06-10 12:18:05.005073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.719 [2024-06-10 12:18:05.005085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.719 qpair failed and we were unable to recover it.
00:29:15.719 [2024-06-10 12:18:05.005296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.719 [2024-06-10 12:18:05.005337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.719 qpair failed and we were unable to recover it.
00:29:15.719 [2024-06-10 12:18:05.005556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.719 [2024-06-10 12:18:05.005598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.719 qpair failed and we were unable to recover it.
00:29:15.719 [2024-06-10 12:18:05.005812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.719 [2024-06-10 12:18:05.005858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.719 qpair failed and we were unable to recover it.
00:29:15.719 [2024-06-10 12:18:05.006030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.719 [2024-06-10 12:18:05.006070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.719 qpair failed and we were unable to recover it.
00:29:15.719 [2024-06-10 12:18:05.006304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.719 [2024-06-10 12:18:05.006344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.719 qpair failed and we were unable to recover it.
00:29:15.719 [2024-06-10 12:18:05.006565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.719 [2024-06-10 12:18:05.006607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.719 qpair failed and we were unable to recover it.
00:29:15.719 [2024-06-10 12:18:05.006758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.719 [2024-06-10 12:18:05.006799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.719 qpair failed and we were unable to recover it.
00:29:15.719 [2024-06-10 12:18:05.007008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.719 [2024-06-10 12:18:05.007047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.719 qpair failed and we were unable to recover it.
00:29:15.719 [2024-06-10 12:18:05.007299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.719 [2024-06-10 12:18:05.007338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.719 qpair failed and we were unable to recover it.
00:29:15.719 [2024-06-10 12:18:05.007570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.719 [2024-06-10 12:18:05.007612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.719 qpair failed and we were unable to recover it.
00:29:15.719 [2024-06-10 12:18:05.007749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.719 [2024-06-10 12:18:05.007760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.719 qpair failed and we were unable to recover it.
00:29:15.719 [2024-06-10 12:18:05.007992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.719 [2024-06-10 12:18:05.008032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.719 qpair failed and we were unable to recover it.
00:29:15.719 [2024-06-10 12:18:05.008185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.719 [2024-06-10 12:18:05.008226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.719 qpair failed and we were unable to recover it.
00:29:15.719 [2024-06-10 12:18:05.008439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.719 [2024-06-10 12:18:05.008487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.719 qpair failed and we were unable to recover it.
00:29:15.719 [2024-06-10 12:18:05.008634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.719 [2024-06-10 12:18:05.008646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.719 qpair failed and we were unable to recover it.
00:29:15.719 [2024-06-10 12:18:05.008810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.719 [2024-06-10 12:18:05.008822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.719 qpair failed and we were unable to recover it.
00:29:15.719 [2024-06-10 12:18:05.009039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.719 [2024-06-10 12:18:05.009051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.719 qpair failed and we were unable to recover it.
00:29:15.719 [2024-06-10 12:18:05.009136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.719 [2024-06-10 12:18:05.009148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.719 qpair failed and we were unable to recover it.
00:29:15.719 [2024-06-10 12:18:05.009243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.719 [2024-06-10 12:18:05.009255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.719 qpair failed and we were unable to recover it.
00:29:15.719 [2024-06-10 12:18:05.009404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.719 [2024-06-10 12:18:05.009415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.719 qpair failed and we were unable to recover it.
00:29:15.719 [2024-06-10 12:18:05.009495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.719 [2024-06-10 12:18:05.009507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.719 qpair failed and we were unable to recover it.
00:29:15.719 [2024-06-10 12:18:05.009596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.719 [2024-06-10 12:18:05.009607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.719 qpair failed and we were unable to recover it.
00:29:15.719 [2024-06-10 12:18:05.009689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.719 [2024-06-10 12:18:05.009700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.719 qpair failed and we were unable to recover it.
00:29:15.719 [2024-06-10 12:18:05.009868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.719 [2024-06-10 12:18:05.009908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.719 qpair failed and we were unable to recover it.
00:29:15.719 [2024-06-10 12:18:05.010075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.719 [2024-06-10 12:18:05.010116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.719 qpair failed and we were unable to recover it.
00:29:15.719 [2024-06-10 12:18:05.010327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.719 [2024-06-10 12:18:05.010366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.719 qpair failed and we were unable to recover it.
00:29:15.719 [2024-06-10 12:18:05.010545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.719 [2024-06-10 12:18:05.010557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.719 qpair failed and we were unable to recover it.
00:29:15.719 [2024-06-10 12:18:05.010724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.720 [2024-06-10 12:18:05.010766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.720 qpair failed and we were unable to recover it.
00:29:15.720 [2024-06-10 12:18:05.010922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.720 [2024-06-10 12:18:05.010961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.720 qpair failed and we were unable to recover it.
00:29:15.720 [2024-06-10 12:18:05.011174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.720 [2024-06-10 12:18:05.011214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.720 qpair failed and we were unable to recover it.
00:29:15.720 [2024-06-10 12:18:05.011432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.720 [2024-06-10 12:18:05.011444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.720 qpair failed and we were unable to recover it.
00:29:15.720 [2024-06-10 12:18:05.011591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.720 [2024-06-10 12:18:05.011644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.720 qpair failed and we were unable to recover it.
00:29:15.720 [2024-06-10 12:18:05.011852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.720 [2024-06-10 12:18:05.011891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.720 qpair failed and we were unable to recover it.
00:29:15.720 [2024-06-10 12:18:05.012147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.720 [2024-06-10 12:18:05.012187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.720 qpair failed and we were unable to recover it.
00:29:15.720 [2024-06-10 12:18:05.012396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.720 [2024-06-10 12:18:05.012436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.720 qpair failed and we were unable to recover it.
00:29:15.720 [2024-06-10 12:18:05.012745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.720 [2024-06-10 12:18:05.012757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.720 qpair failed and we were unable to recover it.
00:29:15.720 [2024-06-10 12:18:05.012918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.720 [2024-06-10 12:18:05.012930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.720 qpair failed and we were unable to recover it.
00:29:15.720 [2024-06-10 12:18:05.013022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.720 [2024-06-10 12:18:05.013034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.720 qpair failed and we were unable to recover it.
00:29:15.720 [2024-06-10 12:18:05.013116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.720 [2024-06-10 12:18:05.013128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.720 qpair failed and we were unable to recover it. 00:29:15.720 [2024-06-10 12:18:05.013277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.720 [2024-06-10 12:18:05.013290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.720 qpair failed and we were unable to recover it. 00:29:15.720 [2024-06-10 12:18:05.013551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.720 [2024-06-10 12:18:05.013583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.720 qpair failed and we were unable to recover it. 00:29:15.720 [2024-06-10 12:18:05.013803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.720 [2024-06-10 12:18:05.013843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.720 qpair failed and we were unable to recover it. 00:29:15.720 [2024-06-10 12:18:05.014070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.720 [2024-06-10 12:18:05.014121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.720 qpair failed and we were unable to recover it. 
00:29:15.720 [2024-06-10 12:18:05.014278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.720 [2024-06-10 12:18:05.014317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.720 qpair failed and we were unable to recover it. 00:29:15.720 [2024-06-10 12:18:05.014571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.720 [2024-06-10 12:18:05.014612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.720 qpair failed and we were unable to recover it. 00:29:15.720 [2024-06-10 12:18:05.014887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.720 [2024-06-10 12:18:05.014900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.720 qpair failed and we were unable to recover it. 00:29:15.720 [2024-06-10 12:18:05.015115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.720 [2024-06-10 12:18:05.015127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.720 qpair failed and we were unable to recover it. 00:29:15.720 [2024-06-10 12:18:05.015220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.720 [2024-06-10 12:18:05.015232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.720 qpair failed and we were unable to recover it. 
00:29:15.720 [2024-06-10 12:18:05.015323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.720 [2024-06-10 12:18:05.015335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.720 qpair failed and we were unable to recover it. 00:29:15.720 [2024-06-10 12:18:05.015442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.720 [2024-06-10 12:18:05.015504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.720 qpair failed and we were unable to recover it. 00:29:15.720 [2024-06-10 12:18:05.015678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.720 [2024-06-10 12:18:05.015718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.720 qpair failed and we were unable to recover it. 00:29:15.720 [2024-06-10 12:18:05.015861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.720 [2024-06-10 12:18:05.015902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.720 qpair failed and we were unable to recover it. 00:29:15.720 [2024-06-10 12:18:05.016034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.720 [2024-06-10 12:18:05.016073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.720 qpair failed and we were unable to recover it. 
00:29:15.720 [2024-06-10 12:18:05.016224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.720 [2024-06-10 12:18:05.016264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.720 qpair failed and we were unable to recover it. 00:29:15.720 [2024-06-10 12:18:05.016485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.720 [2024-06-10 12:18:05.016527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.720 qpair failed and we were unable to recover it. 00:29:15.720 [2024-06-10 12:18:05.016659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.720 [2024-06-10 12:18:05.016670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.720 qpair failed and we were unable to recover it. 00:29:15.720 [2024-06-10 12:18:05.016836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.720 [2024-06-10 12:18:05.016848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.720 qpair failed and we were unable to recover it. 00:29:15.720 [2024-06-10 12:18:05.016945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.720 [2024-06-10 12:18:05.016956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.720 qpair failed and we were unable to recover it. 
00:29:15.720 [2024-06-10 12:18:05.017033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.720 [2024-06-10 12:18:05.017045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.720 qpair failed and we were unable to recover it. 00:29:15.720 [2024-06-10 12:18:05.017151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.720 [2024-06-10 12:18:05.017163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.720 qpair failed and we were unable to recover it. 00:29:15.720 [2024-06-10 12:18:05.017343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.720 [2024-06-10 12:18:05.017355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.720 qpair failed and we were unable to recover it. 00:29:15.720 [2024-06-10 12:18:05.017432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.720 [2024-06-10 12:18:05.017444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.720 qpair failed and we were unable to recover it. 00:29:15.720 [2024-06-10 12:18:05.017611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.720 [2024-06-10 12:18:05.017623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.720 qpair failed and we were unable to recover it. 
00:29:15.720 [2024-06-10 12:18:05.017772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.720 [2024-06-10 12:18:05.017785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.720 qpair failed and we were unable to recover it. 00:29:15.721 [2024-06-10 12:18:05.017954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.721 [2024-06-10 12:18:05.017994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.721 qpair failed and we were unable to recover it. 00:29:15.721 [2024-06-10 12:18:05.018169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.721 [2024-06-10 12:18:05.018210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.721 qpair failed and we were unable to recover it. 00:29:15.721 [2024-06-10 12:18:05.018378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.721 [2024-06-10 12:18:05.018418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.721 qpair failed and we were unable to recover it. 00:29:15.721 [2024-06-10 12:18:05.018571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.721 [2024-06-10 12:18:05.018583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.721 qpair failed and we were unable to recover it. 
00:29:15.721 [2024-06-10 12:18:05.018774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.721 [2024-06-10 12:18:05.018786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.721 qpair failed and we were unable to recover it. 00:29:15.721 [2024-06-10 12:18:05.019027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.721 [2024-06-10 12:18:05.019039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.721 qpair failed and we were unable to recover it. 00:29:15.721 [2024-06-10 12:18:05.019186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.721 [2024-06-10 12:18:05.019198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.721 qpair failed and we were unable to recover it. 00:29:15.721 [2024-06-10 12:18:05.019431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.721 [2024-06-10 12:18:05.019443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.721 qpair failed and we were unable to recover it. 00:29:15.721 [2024-06-10 12:18:05.019617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.721 [2024-06-10 12:18:05.019659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.721 qpair failed and we were unable to recover it. 
00:29:15.721 [2024-06-10 12:18:05.019892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.721 [2024-06-10 12:18:05.019932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.721 qpair failed and we were unable to recover it. 00:29:15.721 [2024-06-10 12:18:05.020094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.721 [2024-06-10 12:18:05.020133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.721 qpair failed and we were unable to recover it. 00:29:15.721 [2024-06-10 12:18:05.020311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.721 [2024-06-10 12:18:05.020323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.721 qpair failed and we were unable to recover it. 00:29:15.721 [2024-06-10 12:18:05.020468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.721 [2024-06-10 12:18:05.020484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.721 qpair failed and we were unable to recover it. 00:29:15.721 [2024-06-10 12:18:05.020589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.721 [2024-06-10 12:18:05.020618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.721 qpair failed and we were unable to recover it. 
00:29:15.721 [2024-06-10 12:18:05.020922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.721 [2024-06-10 12:18:05.020963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.721 qpair failed and we were unable to recover it. 00:29:15.721 [2024-06-10 12:18:05.021248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.721 [2024-06-10 12:18:05.021288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.721 qpair failed and we were unable to recover it. 00:29:15.721 [2024-06-10 12:18:05.021528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.721 [2024-06-10 12:18:05.021576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.721 qpair failed and we were unable to recover it. 00:29:15.721 [2024-06-10 12:18:05.021666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.721 [2024-06-10 12:18:05.021678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.721 qpair failed and we were unable to recover it. 00:29:15.721 [2024-06-10 12:18:05.021892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.721 [2024-06-10 12:18:05.021905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.721 qpair failed and we were unable to recover it. 
00:29:15.721 [2024-06-10 12:18:05.022051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.721 [2024-06-10 12:18:05.022063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.721 qpair failed and we were unable to recover it. 00:29:15.721 [2024-06-10 12:18:05.022206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.721 [2024-06-10 12:18:05.022218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.721 qpair failed and we were unable to recover it. 00:29:15.721 [2024-06-10 12:18:05.022307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.721 [2024-06-10 12:18:05.022319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.721 qpair failed and we were unable to recover it. 00:29:15.721 [2024-06-10 12:18:05.022483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.721 [2024-06-10 12:18:05.022495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.721 qpair failed and we were unable to recover it. 00:29:15.721 [2024-06-10 12:18:05.022589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.721 [2024-06-10 12:18:05.022638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.721 qpair failed and we were unable to recover it. 
00:29:15.721 [2024-06-10 12:18:05.022881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.721 [2024-06-10 12:18:05.022920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.721 qpair failed and we were unable to recover it. 00:29:15.721 [2024-06-10 12:18:05.023128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.721 [2024-06-10 12:18:05.023167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.721 qpair failed and we were unable to recover it. 00:29:15.721 [2024-06-10 12:18:05.023318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.721 [2024-06-10 12:18:05.023359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.721 qpair failed and we were unable to recover it. 00:29:15.721 [2024-06-10 12:18:05.023582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.721 [2024-06-10 12:18:05.023623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.721 qpair failed and we were unable to recover it. 00:29:15.721 [2024-06-10 12:18:05.023899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.721 [2024-06-10 12:18:05.023911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.721 qpair failed and we were unable to recover it. 
00:29:15.721 [2024-06-10 12:18:05.024070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.721 [2024-06-10 12:18:05.024082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.721 qpair failed and we were unable to recover it. 00:29:15.721 [2024-06-10 12:18:05.024227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.721 [2024-06-10 12:18:05.024239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.721 qpair failed and we were unable to recover it. 00:29:15.721 [2024-06-10 12:18:05.024332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.721 [2024-06-10 12:18:05.024343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.721 qpair failed and we were unable to recover it. 00:29:15.721 [2024-06-10 12:18:05.024517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.721 [2024-06-10 12:18:05.024559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.721 qpair failed and we were unable to recover it. 00:29:15.721 [2024-06-10 12:18:05.024799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.721 [2024-06-10 12:18:05.024839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.722 qpair failed and we were unable to recover it. 
00:29:15.722 [2024-06-10 12:18:05.025010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.722 [2024-06-10 12:18:05.025050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.722 qpair failed and we were unable to recover it. 00:29:15.722 [2024-06-10 12:18:05.025303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.722 [2024-06-10 12:18:05.025343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.722 qpair failed and we were unable to recover it. 00:29:15.722 [2024-06-10 12:18:05.025502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.722 [2024-06-10 12:18:05.025542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.722 qpair failed and we were unable to recover it. 00:29:15.722 [2024-06-10 12:18:05.025794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.722 [2024-06-10 12:18:05.025806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.722 qpair failed and we were unable to recover it. 00:29:15.722 [2024-06-10 12:18:05.025899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.722 [2024-06-10 12:18:05.025911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.722 qpair failed and we were unable to recover it. 
00:29:15.722 [2024-06-10 12:18:05.026065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.722 [2024-06-10 12:18:05.026077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.722 qpair failed and we were unable to recover it. 00:29:15.722 [2024-06-10 12:18:05.026172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.722 [2024-06-10 12:18:05.026184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.722 qpair failed and we were unable to recover it. 00:29:15.722 [2024-06-10 12:18:05.026441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.722 [2024-06-10 12:18:05.026490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.722 qpair failed and we were unable to recover it. 00:29:15.722 [2024-06-10 12:18:05.026657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.722 [2024-06-10 12:18:05.026697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.722 qpair failed and we were unable to recover it. 00:29:15.722 [2024-06-10 12:18:05.026858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.722 [2024-06-10 12:18:05.026898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.722 qpair failed and we were unable to recover it. 
00:29:15.722 [2024-06-10 12:18:05.027030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.722 [2024-06-10 12:18:05.027071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.722 qpair failed and we were unable to recover it. 00:29:15.722 [2024-06-10 12:18:05.027278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.722 [2024-06-10 12:18:05.027355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.722 qpair failed and we were unable to recover it. 00:29:15.722 [2024-06-10 12:18:05.027675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.722 [2024-06-10 12:18:05.027720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.722 qpair failed and we were unable to recover it. 00:29:15.722 [2024-06-10 12:18:05.027852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.722 [2024-06-10 12:18:05.027869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.722 qpair failed and we were unable to recover it. 00:29:15.722 [2024-06-10 12:18:05.028141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.722 [2024-06-10 12:18:05.028158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.722 qpair failed and we were unable to recover it. 
00:29:15.722 [2024-06-10 12:18:05.028327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.722 [2024-06-10 12:18:05.028368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.722 qpair failed and we were unable to recover it. 00:29:15.722 [2024-06-10 12:18:05.028574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.722 [2024-06-10 12:18:05.028616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.722 qpair failed and we were unable to recover it. 00:29:15.722 [2024-06-10 12:18:05.028918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.722 [2024-06-10 12:18:05.028958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.722 qpair failed and we were unable to recover it. 00:29:15.722 [2024-06-10 12:18:05.029237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.722 [2024-06-10 12:18:05.029277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.722 qpair failed and we were unable to recover it. 00:29:15.722 [2024-06-10 12:18:05.029503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.722 [2024-06-10 12:18:05.029521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.722 qpair failed and we were unable to recover it. 
00:29:15.722 [2024-06-10 12:18:05.029616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.722 [2024-06-10 12:18:05.029630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.722 qpair failed and we were unable to recover it. 00:29:15.722 [2024-06-10 12:18:05.029723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.722 [2024-06-10 12:18:05.029735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.722 qpair failed and we were unable to recover it. 00:29:15.722 [2024-06-10 12:18:05.029898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.722 [2024-06-10 12:18:05.029910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.722 qpair failed and we were unable to recover it. 00:29:15.722 [2024-06-10 12:18:05.030097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.722 [2024-06-10 12:18:05.030109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.722 qpair failed and we were unable to recover it. 00:29:15.722 [2024-06-10 12:18:05.030270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.722 [2024-06-10 12:18:05.030282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.722 qpair failed and we were unable to recover it. 
00:29:15.722 [2024-06-10 12:18:05.030510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.722 [2024-06-10 12:18:05.030552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:15.722 qpair failed and we were unable to recover it.
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." triplet repeats ~114 more times between 12:18:05.030 and 12:18:05.058, every occurrence with errno = 111, tqpair=0x7f5044000b90, addr=10.0.0.2, port=4420 ...]
00:29:15.726 [2024-06-10 12:18:05.058829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.726 [2024-06-10 12:18:05.058841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.726 qpair failed and we were unable to recover it. 00:29:15.726 [2024-06-10 12:18:05.059057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.726 [2024-06-10 12:18:05.059068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.726 qpair failed and we were unable to recover it. 00:29:15.726 [2024-06-10 12:18:05.059307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.726 [2024-06-10 12:18:05.059319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.726 qpair failed and we were unable to recover it. 00:29:15.726 [2024-06-10 12:18:05.059406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.726 [2024-06-10 12:18:05.059418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.726 qpair failed and we were unable to recover it. 00:29:15.726 [2024-06-10 12:18:05.059574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.726 [2024-06-10 12:18:05.059586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:15.726 qpair failed and we were unable to recover it. 
00:29:15.726 [2024-06-10 12:18:05.059612] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x89ab50 (9): Bad file descriptor
00:29:15.726 [2024-06-10 12:18:05.059761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.726 [2024-06-10 12:18:05.059785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:15.726 qpair failed and we were unable to recover it.
00:29:15.727 [2024-06-10 12:18:05.069885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.727 [2024-06-10 12:18:05.069962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:15.727 qpair failed and we were unable to recover it.
00:29:15.728 [2024-06-10 12:18:05.082604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.728 [2024-06-10 12:18:05.082620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.728 qpair failed and we were unable to recover it. 00:29:15.728 [2024-06-10 12:18:05.082789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.728 [2024-06-10 12:18:05.082829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.728 qpair failed and we were unable to recover it. 00:29:15.728 [2024-06-10 12:18:05.083041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.729 [2024-06-10 12:18:05.083081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.729 qpair failed and we were unable to recover it. 00:29:15.729 [2024-06-10 12:18:05.083298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.729 [2024-06-10 12:18:05.083343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.729 qpair failed and we were unable to recover it. 00:29:15.729 [2024-06-10 12:18:05.083641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.729 [2024-06-10 12:18:05.083681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.729 qpair failed and we were unable to recover it. 
00:29:15.729 [2024-06-10 12:18:05.083877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.729 [2024-06-10 12:18:05.083894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.729 qpair failed and we were unable to recover it. 00:29:15.729 [2024-06-10 12:18:05.083983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.729 [2024-06-10 12:18:05.083999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.729 qpair failed and we were unable to recover it. 00:29:15.729 [2024-06-10 12:18:05.084182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.729 [2024-06-10 12:18:05.084198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.729 qpair failed and we were unable to recover it. 00:29:15.729 [2024-06-10 12:18:05.084420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.729 [2024-06-10 12:18:05.084436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.729 qpair failed and we were unable to recover it. 00:29:15.729 [2024-06-10 12:18:05.084605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.729 [2024-06-10 12:18:05.084621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.729 qpair failed and we were unable to recover it. 
00:29:15.729 [2024-06-10 12:18:05.084877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.729 [2024-06-10 12:18:05.084917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.729 qpair failed and we were unable to recover it. 00:29:15.729 [2024-06-10 12:18:05.085169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.729 [2024-06-10 12:18:05.085211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.729 qpair failed and we were unable to recover it. 00:29:15.729 [2024-06-10 12:18:05.085520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.729 [2024-06-10 12:18:05.085574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.729 qpair failed and we were unable to recover it. 00:29:15.729 [2024-06-10 12:18:05.085760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.729 [2024-06-10 12:18:05.085777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.729 qpair failed and we were unable to recover it. 00:29:15.729 [2024-06-10 12:18:05.085975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.729 [2024-06-10 12:18:05.085992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.729 qpair failed and we were unable to recover it. 
00:29:15.729 [2024-06-10 12:18:05.086233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.729 [2024-06-10 12:18:05.086249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.729 qpair failed and we were unable to recover it. 00:29:15.729 [2024-06-10 12:18:05.086332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.729 [2024-06-10 12:18:05.086348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.729 qpair failed and we were unable to recover it. 00:29:15.729 [2024-06-10 12:18:05.086535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.729 [2024-06-10 12:18:05.086577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.729 qpair failed and we were unable to recover it. 00:29:15.729 [2024-06-10 12:18:05.086752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.729 [2024-06-10 12:18:05.086792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.729 qpair failed and we were unable to recover it. 00:29:15.729 [2024-06-10 12:18:05.087013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.729 [2024-06-10 12:18:05.087057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.729 qpair failed and we were unable to recover it. 
00:29:15.729 [2024-06-10 12:18:05.087168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.729 [2024-06-10 12:18:05.087185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.729 qpair failed and we were unable to recover it. 00:29:15.729 [2024-06-10 12:18:05.087361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.729 [2024-06-10 12:18:05.087401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.729 qpair failed and we were unable to recover it. 00:29:15.729 [2024-06-10 12:18:05.088956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.729 [2024-06-10 12:18:05.088984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.729 qpair failed and we were unable to recover it. 00:29:15.729 [2024-06-10 12:18:05.089268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.729 [2024-06-10 12:18:05.089285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.729 qpair failed and we were unable to recover it. 00:29:15.729 [2024-06-10 12:18:05.089515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.729 [2024-06-10 12:18:05.089533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.729 qpair failed and we were unable to recover it. 
00:29:15.729 [2024-06-10 12:18:05.089754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.729 [2024-06-10 12:18:05.089771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.729 qpair failed and we were unable to recover it. 00:29:15.729 [2024-06-10 12:18:05.089981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.729 [2024-06-10 12:18:05.089997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.729 qpair failed and we were unable to recover it. 00:29:15.729 [2024-06-10 12:18:05.090120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.729 [2024-06-10 12:18:05.090136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.729 qpair failed and we were unable to recover it. 00:29:15.729 [2024-06-10 12:18:05.090332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.729 [2024-06-10 12:18:05.090372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.729 qpair failed and we were unable to recover it. 00:29:15.729 [2024-06-10 12:18:05.090600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.729 [2024-06-10 12:18:05.090642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.729 qpair failed and we were unable to recover it. 
00:29:15.729 [2024-06-10 12:18:05.090902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.729 [2024-06-10 12:18:05.090919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.729 qpair failed and we were unable to recover it. 00:29:15.729 [2024-06-10 12:18:05.091158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.729 [2024-06-10 12:18:05.091198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.729 qpair failed and we were unable to recover it. 00:29:15.729 [2024-06-10 12:18:05.091435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.729 [2024-06-10 12:18:05.091475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.729 qpair failed and we were unable to recover it. 00:29:15.729 [2024-06-10 12:18:05.091705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.729 [2024-06-10 12:18:05.091721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.729 qpair failed and we were unable to recover it. 00:29:15.730 [2024-06-10 12:18:05.092121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.730 [2024-06-10 12:18:05.092144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.730 qpair failed and we were unable to recover it. 
00:29:15.730 [2024-06-10 12:18:05.092312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.730 [2024-06-10 12:18:05.092329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.730 qpair failed and we were unable to recover it. 00:29:15.730 [2024-06-10 12:18:05.092500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.730 [2024-06-10 12:18:05.092517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.730 qpair failed and we were unable to recover it. 00:29:15.730 [2024-06-10 12:18:05.092694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.730 [2024-06-10 12:18:05.092734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.730 qpair failed and we were unable to recover it. 00:29:15.730 [2024-06-10 12:18:05.093030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.730 [2024-06-10 12:18:05.093070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.730 qpair failed and we were unable to recover it. 00:29:15.730 [2024-06-10 12:18:05.093237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.730 [2024-06-10 12:18:05.093276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.730 qpair failed and we were unable to recover it. 
00:29:15.730 [2024-06-10 12:18:05.093556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.730 [2024-06-10 12:18:05.093597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.730 qpair failed and we were unable to recover it. 00:29:15.730 [2024-06-10 12:18:05.093850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.730 [2024-06-10 12:18:05.093891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.730 qpair failed and we were unable to recover it. 00:29:15.730 [2024-06-10 12:18:05.094116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.730 [2024-06-10 12:18:05.094155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.730 qpair failed and we were unable to recover it. 00:29:15.730 [2024-06-10 12:18:05.094300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.730 [2024-06-10 12:18:05.094346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.730 qpair failed and we were unable to recover it. 00:29:15.730 [2024-06-10 12:18:05.094529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.730 [2024-06-10 12:18:05.094546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.730 qpair failed and we were unable to recover it. 
00:29:15.730 [2024-06-10 12:18:05.094808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.730 [2024-06-10 12:18:05.094858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.730 qpair failed and we were unable to recover it. 00:29:15.730 [2024-06-10 12:18:05.095064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.730 [2024-06-10 12:18:05.095104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.730 qpair failed and we were unable to recover it. 00:29:15.730 [2024-06-10 12:18:05.095320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.730 [2024-06-10 12:18:05.095361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.730 qpair failed and we were unable to recover it. 00:29:15.730 [2024-06-10 12:18:05.095658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.730 [2024-06-10 12:18:05.095698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.730 qpair failed and we were unable to recover it. 00:29:15.730 [2024-06-10 12:18:05.095896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.730 [2024-06-10 12:18:05.095912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.730 qpair failed and we were unable to recover it. 
00:29:15.730 [2024-06-10 12:18:05.096166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.730 [2024-06-10 12:18:05.096206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.730 qpair failed and we were unable to recover it. 00:29:15.730 [2024-06-10 12:18:05.096435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.730 [2024-06-10 12:18:05.096475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.730 qpair failed and we were unable to recover it. 00:29:15.730 [2024-06-10 12:18:05.096625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.730 [2024-06-10 12:18:05.096641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.730 qpair failed and we were unable to recover it. 00:29:15.730 [2024-06-10 12:18:05.096815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.730 [2024-06-10 12:18:05.096855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.730 qpair failed and we were unable to recover it. 00:29:15.730 [2024-06-10 12:18:05.097016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.730 [2024-06-10 12:18:05.097055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.730 qpair failed and we were unable to recover it. 
00:29:15.730 [2024-06-10 12:18:05.097333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.730 [2024-06-10 12:18:05.097374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.730 qpair failed and we were unable to recover it. 00:29:15.730 [2024-06-10 12:18:05.097579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.730 [2024-06-10 12:18:05.097621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.730 qpair failed and we were unable to recover it. 00:29:15.730 [2024-06-10 12:18:05.097932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.730 [2024-06-10 12:18:05.097972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.730 qpair failed and we were unable to recover it. 00:29:15.730 [2024-06-10 12:18:05.098199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.730 [2024-06-10 12:18:05.098216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.730 qpair failed and we were unable to recover it. 00:29:15.730 [2024-06-10 12:18:05.098326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.730 [2024-06-10 12:18:05.098343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.730 qpair failed and we were unable to recover it. 
00:29:15.730 [2024-06-10 12:18:05.098444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.730 [2024-06-10 12:18:05.098460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.730 qpair failed and we were unable to recover it. 00:29:15.730 [2024-06-10 12:18:05.098641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.730 [2024-06-10 12:18:05.098677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.730 qpair failed and we were unable to recover it. 00:29:15.730 [2024-06-10 12:18:05.098859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.730 [2024-06-10 12:18:05.098902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.730 qpair failed and we were unable to recover it. 00:29:15.730 [2024-06-10 12:18:05.099130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.730 [2024-06-10 12:18:05.099172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.730 qpair failed and we were unable to recover it. 00:29:15.730 [2024-06-10 12:18:05.099400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.730 [2024-06-10 12:18:05.099440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.730 qpair failed and we were unable to recover it. 
00:29:15.730 [2024-06-10 12:18:05.099694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.730 [2024-06-10 12:18:05.099711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.730 qpair failed and we were unable to recover it. 00:29:15.730 [2024-06-10 12:18:05.099873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.730 [2024-06-10 12:18:05.099913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.730 qpair failed and we were unable to recover it. 00:29:15.730 [2024-06-10 12:18:05.100072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.730 [2024-06-10 12:18:05.100112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.731 qpair failed and we were unable to recover it. 00:29:15.731 [2024-06-10 12:18:05.100284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.731 [2024-06-10 12:18:05.100324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.731 qpair failed and we were unable to recover it. 00:29:15.731 [2024-06-10 12:18:05.100600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.731 [2024-06-10 12:18:05.100641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.731 qpair failed and we were unable to recover it. 
00:29:15.731 [2024-06-10 12:18:05.100867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.731 [2024-06-10 12:18:05.100887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.731 qpair failed and we were unable to recover it. 00:29:15.731 [2024-06-10 12:18:05.101019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.731 [2024-06-10 12:18:05.101059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.731 qpair failed and we were unable to recover it. 00:29:15.731 [2024-06-10 12:18:05.101295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.731 [2024-06-10 12:18:05.101335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.731 qpair failed and we were unable to recover it. 00:29:15.731 [2024-06-10 12:18:05.101554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.731 [2024-06-10 12:18:05.101594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.731 qpair failed and we were unable to recover it. 00:29:15.731 [2024-06-10 12:18:05.101836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.731 [2024-06-10 12:18:05.101852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.731 qpair failed and we were unable to recover it. 
00:29:15.731 [2024-06-10 12:18:05.101962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.731 [2024-06-10 12:18:05.101979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.731 qpair failed and we were unable to recover it. 00:29:15.731 [2024-06-10 12:18:05.102067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.731 [2024-06-10 12:18:05.102083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.731 qpair failed and we were unable to recover it. 00:29:15.731 [2024-06-10 12:18:05.102184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.731 [2024-06-10 12:18:05.102200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.731 qpair failed and we were unable to recover it. 00:29:15.731 [2024-06-10 12:18:05.102430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.731 [2024-06-10 12:18:05.102470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.731 qpair failed and we were unable to recover it. 00:29:15.731 [2024-06-10 12:18:05.102652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.731 [2024-06-10 12:18:05.102692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.731 qpair failed and we were unable to recover it. 
00:29:15.734 [2024-06-10 12:18:05.129110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.734 [2024-06-10 12:18:05.129150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.734 qpair failed and we were unable to recover it. 00:29:15.734 [2024-06-10 12:18:05.129413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.734 [2024-06-10 12:18:05.129454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.734 qpair failed and we were unable to recover it. 00:29:15.734 [2024-06-10 12:18:05.129631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.734 [2024-06-10 12:18:05.129672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.734 qpair failed and we were unable to recover it. 00:29:15.734 [2024-06-10 12:18:05.129820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.734 [2024-06-10 12:18:05.129861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.734 qpair failed and we were unable to recover it. 00:29:15.734 [2024-06-10 12:18:05.130159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.734 [2024-06-10 12:18:05.130175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.734 qpair failed and we were unable to recover it. 
00:29:15.734 [2024-06-10 12:18:05.130334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.734 [2024-06-10 12:18:05.130374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.734 qpair failed and we were unable to recover it. 00:29:15.734 [2024-06-10 12:18:05.130586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.734 [2024-06-10 12:18:05.130626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.734 qpair failed and we were unable to recover it. 00:29:15.734 [2024-06-10 12:18:05.130804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.734 [2024-06-10 12:18:05.130845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.734 qpair failed and we were unable to recover it. 00:29:15.734 [2024-06-10 12:18:05.131068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.734 [2024-06-10 12:18:05.131085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.734 qpair failed and we were unable to recover it. 00:29:15.734 [2024-06-10 12:18:05.131263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.734 [2024-06-10 12:18:05.131280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.734 qpair failed and we were unable to recover it. 
00:29:15.734 [2024-06-10 12:18:05.131420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.734 [2024-06-10 12:18:05.131461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.734 qpair failed and we were unable to recover it. 00:29:15.734 [2024-06-10 12:18:05.131762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.734 [2024-06-10 12:18:05.131803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.734 qpair failed and we were unable to recover it. 00:29:15.735 [2024-06-10 12:18:05.132014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.735 [2024-06-10 12:18:05.132030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.735 qpair failed and we were unable to recover it. 00:29:15.735 [2024-06-10 12:18:05.132141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.735 [2024-06-10 12:18:05.132177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.735 qpair failed and we were unable to recover it. 00:29:15.735 [2024-06-10 12:18:05.132486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.735 [2024-06-10 12:18:05.132527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.735 qpair failed and we were unable to recover it. 
00:29:15.735 [2024-06-10 12:18:05.132672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.735 [2024-06-10 12:18:05.132712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.735 qpair failed and we were unable to recover it. 00:29:15.735 [2024-06-10 12:18:05.132901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.735 [2024-06-10 12:18:05.132923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.735 qpair failed and we were unable to recover it. 00:29:15.735 [2024-06-10 12:18:05.133027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.735 [2024-06-10 12:18:05.133067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.735 qpair failed and we were unable to recover it. 00:29:15.735 [2024-06-10 12:18:05.133231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.735 [2024-06-10 12:18:05.133271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.735 qpair failed and we were unable to recover it. 00:29:15.735 [2024-06-10 12:18:05.133430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.735 [2024-06-10 12:18:05.133471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.735 qpair failed and we were unable to recover it. 
00:29:15.735 [2024-06-10 12:18:05.133742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.735 [2024-06-10 12:18:05.133783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.735 qpair failed and we were unable to recover it. 00:29:15.735 [2024-06-10 12:18:05.133929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.735 [2024-06-10 12:18:05.133968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.735 qpair failed and we were unable to recover it. 00:29:15.735 [2024-06-10 12:18:05.134175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.735 [2024-06-10 12:18:05.134192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.735 qpair failed and we were unable to recover it. 00:29:15.735 [2024-06-10 12:18:05.134398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.735 [2024-06-10 12:18:05.134438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.735 qpair failed and we were unable to recover it. 00:29:15.735 [2024-06-10 12:18:05.134594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.735 [2024-06-10 12:18:05.134634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.735 qpair failed and we were unable to recover it. 
00:29:15.735 [2024-06-10 12:18:05.134890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.735 [2024-06-10 12:18:05.134931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.735 qpair failed and we were unable to recover it. 00:29:15.735 [2024-06-10 12:18:05.135096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.735 [2024-06-10 12:18:05.135112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.735 qpair failed and we were unable to recover it. 00:29:15.735 [2024-06-10 12:18:05.135283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.735 [2024-06-10 12:18:05.135323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.735 qpair failed and we were unable to recover it. 00:29:15.735 [2024-06-10 12:18:05.135539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.735 [2024-06-10 12:18:05.135580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.735 qpair failed and we were unable to recover it. 00:29:15.735 [2024-06-10 12:18:05.135808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.735 [2024-06-10 12:18:05.135850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.735 qpair failed and we were unable to recover it. 
00:29:15.735 [2024-06-10 12:18:05.136029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.735 [2024-06-10 12:18:05.136046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.735 qpair failed and we were unable to recover it. 00:29:15.735 [2024-06-10 12:18:05.136230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.735 [2024-06-10 12:18:05.136247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.735 qpair failed and we were unable to recover it. 00:29:15.735 [2024-06-10 12:18:05.136363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.735 [2024-06-10 12:18:05.136380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.735 qpair failed and we were unable to recover it. 00:29:15.735 [2024-06-10 12:18:05.136656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.735 [2024-06-10 12:18:05.136698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.735 qpair failed and we were unable to recover it. 00:29:15.735 [2024-06-10 12:18:05.136929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.735 [2024-06-10 12:18:05.136970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.735 qpair failed and we were unable to recover it. 
00:29:15.735 [2024-06-10 12:18:05.137193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.735 [2024-06-10 12:18:05.137233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.735 qpair failed and we were unable to recover it. 00:29:15.735 [2024-06-10 12:18:05.137404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.735 [2024-06-10 12:18:05.137444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.735 qpair failed and we were unable to recover it. 00:29:15.735 [2024-06-10 12:18:05.137622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.735 [2024-06-10 12:18:05.137664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.735 qpair failed and we were unable to recover it. 00:29:15.735 [2024-06-10 12:18:05.137874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.735 [2024-06-10 12:18:05.137891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.735 qpair failed and we were unable to recover it. 00:29:15.735 [2024-06-10 12:18:05.138079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.735 [2024-06-10 12:18:05.138119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.735 qpair failed and we were unable to recover it. 
00:29:15.735 [2024-06-10 12:18:05.138282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.735 [2024-06-10 12:18:05.138322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.735 qpair failed and we were unable to recover it. 00:29:15.735 [2024-06-10 12:18:05.138619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.735 [2024-06-10 12:18:05.138679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.735 qpair failed and we were unable to recover it. 00:29:15.735 [2024-06-10 12:18:05.138868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.735 [2024-06-10 12:18:05.138884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.735 qpair failed and we were unable to recover it. 00:29:15.735 [2024-06-10 12:18:05.139150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.735 [2024-06-10 12:18:05.139190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.735 qpair failed and we were unable to recover it. 00:29:15.735 [2024-06-10 12:18:05.139438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.736 [2024-06-10 12:18:05.139489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.736 qpair failed and we were unable to recover it. 
00:29:15.736 [2024-06-10 12:18:05.139703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.736 [2024-06-10 12:18:05.139746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.736 qpair failed and we were unable to recover it. 00:29:15.736 [2024-06-10 12:18:05.139951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.736 [2024-06-10 12:18:05.139968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.736 qpair failed and we were unable to recover it. 00:29:15.736 [2024-06-10 12:18:05.140068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.736 [2024-06-10 12:18:05.140084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.736 qpair failed and we were unable to recover it. 00:29:15.736 [2024-06-10 12:18:05.140274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.736 [2024-06-10 12:18:05.140314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.736 qpair failed and we were unable to recover it. 00:29:15.736 [2024-06-10 12:18:05.140618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.736 [2024-06-10 12:18:05.140660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.736 qpair failed and we were unable to recover it. 
00:29:15.736 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2384067 Killed "${NVMF_APP[@]}" "$@"
00:29:15.736 12:18:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:15.736 12:18:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:15.736 12:18:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:15.736 12:18:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable
00:29:15.736 12:18:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... interleaved posix_sock_create connect() failures (errno = 111) and nvme_tcp_qpair_connect_sock errors for tqpair=0x88cfc0 (addr=10.0.0.2, port=4420) continue through 12:18:05.148, each ending with "qpair failed and we were unable to recover it." ...]
00:29:15.737 [2024-06-10 12:18:05.148748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.737 [2024-06-10 12:18:05.148765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.737 qpair failed and we were unable to recover it. 00:29:15.737 [2024-06-10 12:18:05.148920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.737 [2024-06-10 12:18:05.148939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.737 qpair failed and we were unable to recover it. 00:29:15.737 [2024-06-10 12:18:05.149166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.737 [2024-06-10 12:18:05.149183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.737 qpair failed and we were unable to recover it. 00:29:15.737 [2024-06-10 12:18:05.149364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.737 [2024-06-10 12:18:05.149380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.737 qpair failed and we were unable to recover it. 00:29:15.737 [2024-06-10 12:18:05.149566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.737 [2024-06-10 12:18:05.149582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.737 qpair failed and we were unable to recover it. 
00:29:15.737 [2024-06-10 12:18:05.149702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.737 [2024-06-10 12:18:05.149719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.737 qpair failed and we were unable to recover it. 00:29:15.737 [2024-06-10 12:18:05.149833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.737 [2024-06-10 12:18:05.149850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.737 qpair failed and we were unable to recover it. 00:29:15.737 [2024-06-10 12:18:05.149938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.737 [2024-06-10 12:18:05.149954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.737 qpair failed and we were unable to recover it. 00:29:15.737 [2024-06-10 12:18:05.150082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.737 [2024-06-10 12:18:05.150099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.737 qpair failed and we were unable to recover it. 00:29:15.737 [2024-06-10 12:18:05.150241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.737 [2024-06-10 12:18:05.150257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.737 qpair failed and we were unable to recover it. 
00:29:15.737 [2024-06-10 12:18:05.150412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.737 [2024-06-10 12:18:05.150429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.737 qpair failed and we were unable to recover it. 00:29:15.737 [2024-06-10 12:18:05.150654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.737 [2024-06-10 12:18:05.150671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.737 qpair failed and we were unable to recover it. 00:29:15.737 [2024-06-10 12:18:05.150862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.737 [2024-06-10 12:18:05.150878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.737 qpair failed and we were unable to recover it. 00:29:15.737 [2024-06-10 12:18:05.151038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.737 [2024-06-10 12:18:05.151054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.737 qpair failed and we were unable to recover it. 00:29:15.737 [2024-06-10 12:18:05.151215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.737 [2024-06-10 12:18:05.151232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.737 qpair failed and we were unable to recover it. 
00:29:15.737 [2024-06-10 12:18:05.151422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.737 [2024-06-10 12:18:05.151459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.737 qpair failed and we were unable to recover it. 00:29:15.737 [2024-06-10 12:18:05.151656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.737 [2024-06-10 12:18:05.151675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.737 qpair failed and we were unable to recover it. 00:29:15.737 [2024-06-10 12:18:05.151797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.737 [2024-06-10 12:18:05.151814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.737 qpair failed and we were unable to recover it. 00:29:15.737 [2024-06-10 12:18:05.151896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.737 [2024-06-10 12:18:05.151912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.737 qpair failed and we were unable to recover it. 00:29:15.737 [2024-06-10 12:18:05.152022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.737 [2024-06-10 12:18:05.152038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.737 qpair failed and we were unable to recover it. 
00:29:15.737 [2024-06-10 12:18:05.152134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.737 [2024-06-10 12:18:05.152150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.737 qpair failed and we were unable to recover it. 00:29:15.737 [2024-06-10 12:18:05.152331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.737 [2024-06-10 12:18:05.152348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.737 qpair failed and we were unable to recover it. 00:29:15.737 [2024-06-10 12:18:05.152452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.737 [2024-06-10 12:18:05.152468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.737 qpair failed and we were unable to recover it. 00:29:15.737 [2024-06-10 12:18:05.152704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.737 [2024-06-10 12:18:05.152722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.737 qpair failed and we were unable to recover it. 
00:29:15.737 12:18:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2385338 00:29:15.737 12:18:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:15.737 [2024-06-10 12:18:05.152916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.737 [2024-06-10 12:18:05.152935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.737 qpair failed and we were unable to recover it. 00:29:15.737 [2024-06-10 12:18:05.153030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.737 [2024-06-10 12:18:05.153046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.737 qpair failed and we were unable to recover it. 00:29:15.737 [2024-06-10 12:18:05.153152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.737 [2024-06-10 12:18:05.153170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.737 qpair failed and we were unable to recover it. 00:29:15.738 12:18:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2385338 00:29:15.738 [2024-06-10 12:18:05.153331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.738 [2024-06-10 12:18:05.153363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.738 qpair failed and we were unable to recover it. 
00:29:15.738 [2024-06-10 12:18:05.153453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.738 [2024-06-10 12:18:05.153469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.738 qpair failed and we were unable to recover it. 00:29:15.738 [2024-06-10 12:18:05.153584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.738 [2024-06-10 12:18:05.153600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.738 qpair failed and we were unable to recover it. 00:29:15.738 [2024-06-10 12:18:05.153695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.738 [2024-06-10 12:18:05.153711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.738 qpair failed and we were unable to recover it. 00:29:15.738 [2024-06-10 12:18:05.153864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.738 [2024-06-10 12:18:05.153881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.738 qpair failed and we were unable to recover it. 00:29:15.738 [2024-06-10 12:18:05.154034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.738 12:18:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 2385338 ']' 00:29:15.738 [2024-06-10 12:18:05.154050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.738 qpair failed and we were unable to recover it. 
00:29:15.738 [2024-06-10 12:18:05.154220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.738 [2024-06-10 12:18:05.154236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.738 qpair failed and we were unable to recover it. 00:29:15.738 [2024-06-10 12:18:05.154458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.738 [2024-06-10 12:18:05.154482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.738 qpair failed and we were unable to recover it. 00:29:15.738 12:18:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:15.738 [2024-06-10 12:18:05.154590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.738 [2024-06-10 12:18:05.154608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.738 qpair failed and we were unable to recover it. 00:29:15.738 [2024-06-10 12:18:05.154776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.738 [2024-06-10 12:18:05.154793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.738 qpair failed and we were unable to recover it. 00:29:15.738 [2024-06-10 12:18:05.154894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.738 [2024-06-10 12:18:05.154911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.738 qpair failed and we were unable to recover it. 
00:29:15.738 12:18:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:15.738 [2024-06-10 12:18:05.155034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.738 [2024-06-10 12:18:05.155052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:15.738 qpair failed and we were unable to recover it. 00:29:15.738 [2024-06-10 12:18:05.155235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.738 [2024-06-10 12:18:05.155253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.738 qpair failed and we were unable to recover it. 00:29:15.738 [2024-06-10 12:18:05.155484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.738 [2024-06-10 12:18:05.155501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.738 qpair failed and we were unable to recover it. 00:29:15.738 12:18:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:15.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:15.738 [2024-06-10 12:18:05.155675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.738 [2024-06-10 12:18:05.155692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.738 qpair failed and we were unable to recover it. 
00:29:15.738 [2024-06-10 12:18:05.155816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.738 [2024-06-10 12:18:05.155832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.738 qpair failed and we were unable to recover it. 00:29:15.738 [2024-06-10 12:18:05.155958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.738 [2024-06-10 12:18:05.155974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.738 qpair failed and we were unable to recover it. 00:29:15.738 12:18:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:15.738 [2024-06-10 12:18:05.156188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.738 [2024-06-10 12:18:05.156205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.738 qpair failed and we were unable to recover it. 00:29:15.738 [2024-06-10 12:18:05.156429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.738 [2024-06-10 12:18:05.156446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.738 qpair failed and we were unable to recover it. 00:29:15.738 12:18:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.738 [2024-06-10 12:18:05.156568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.738 [2024-06-10 12:18:05.156584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.738 qpair failed and we were unable to recover it. 
00:29:15.738 [2024-06-10 12:18:05.156754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.738 [2024-06-10 12:18:05.156770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.738 qpair failed and we were unable to recover it. 00:29:15.738 [2024-06-10 12:18:05.156960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.738 [2024-06-10 12:18:05.156977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.738 qpair failed and we were unable to recover it. 00:29:15.738 [2024-06-10 12:18:05.157200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.738 [2024-06-10 12:18:05.157216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.738 qpair failed and we were unable to recover it. 00:29:15.738 [2024-06-10 12:18:05.157331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.738 [2024-06-10 12:18:05.157349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.738 qpair failed and we were unable to recover it. 00:29:15.738 [2024-06-10 12:18:05.157456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.738 [2024-06-10 12:18:05.157472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.738 qpair failed and we were unable to recover it. 
00:29:15.738 [2024-06-10 12:18:05.157591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.738 [2024-06-10 12:18:05.157620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.738 qpair failed and we were unable to recover it. 00:29:15.738 [2024-06-10 12:18:05.157714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.738 [2024-06-10 12:18:05.157730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.738 qpair failed and we were unable to recover it. 00:29:15.738 [2024-06-10 12:18:05.157890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.738 [2024-06-10 12:18:05.157907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.738 qpair failed and we were unable to recover it. 00:29:15.738 [2024-06-10 12:18:05.158069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.739 [2024-06-10 12:18:05.158086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.739 qpair failed and we were unable to recover it. 00:29:15.739 [2024-06-10 12:18:05.158205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.739 [2024-06-10 12:18:05.158221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.739 qpair failed and we were unable to recover it. 
00:29:15.739 [2024-06-10 12:18:05.158335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.739 [2024-06-10 12:18:05.158351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.739 qpair failed and we were unable to recover it. 00:29:15.739 [2024-06-10 12:18:05.158575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.739 [2024-06-10 12:18:05.158592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.739 qpair failed and we were unable to recover it. 00:29:15.739 [2024-06-10 12:18:05.158700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.739 [2024-06-10 12:18:05.158719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.739 qpair failed and we were unable to recover it. 00:29:15.739 [2024-06-10 12:18:05.158973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.739 [2024-06-10 12:18:05.158989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.739 qpair failed and we were unable to recover it. 00:29:15.739 [2024-06-10 12:18:05.159093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.739 [2024-06-10 12:18:05.159110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.739 qpair failed and we were unable to recover it. 
00:29:15.739 [2024-06-10 12:18:05.159280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.739 [2024-06-10 12:18:05.159296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.739 qpair failed and we were unable to recover it. 00:29:15.739 [2024-06-10 12:18:05.159402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.739 [2024-06-10 12:18:05.159418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.739 qpair failed and we were unable to recover it. 00:29:15.739 [2024-06-10 12:18:05.159624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.739 [2024-06-10 12:18:05.159643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.739 qpair failed and we were unable to recover it. 00:29:15.739 [2024-06-10 12:18:05.159801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.739 [2024-06-10 12:18:05.159818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.739 qpair failed and we were unable to recover it. 00:29:15.739 [2024-06-10 12:18:05.159968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.739 [2024-06-10 12:18:05.159985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.739 qpair failed and we were unable to recover it. 
00:29:15.739 [2024-06-10 12:18:05.160159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.739 [2024-06-10 12:18:05.160175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.739 qpair failed and we were unable to recover it. 00:29:15.739 [2024-06-10 12:18:05.160333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.739 [2024-06-10 12:18:05.160349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.739 qpair failed and we were unable to recover it. 00:29:15.739 [2024-06-10 12:18:05.160531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.739 [2024-06-10 12:18:05.160549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.739 qpair failed and we were unable to recover it. 00:29:15.739 [2024-06-10 12:18:05.160710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.739 [2024-06-10 12:18:05.160727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.739 qpair failed and we were unable to recover it. 00:29:15.739 [2024-06-10 12:18:05.160890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.739 [2024-06-10 12:18:05.160906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.739 qpair failed and we were unable to recover it. 
00:29:15.739 [2024-06-10 12:18:05.161145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.739 [2024-06-10 12:18:05.161161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.739 qpair failed and we were unable to recover it. 00:29:15.739 [2024-06-10 12:18:05.161340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.739 [2024-06-10 12:18:05.161357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.739 qpair failed and we were unable to recover it. 00:29:15.739 [2024-06-10 12:18:05.161630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.739 [2024-06-10 12:18:05.161648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.739 qpair failed and we were unable to recover it. 00:29:15.739 [2024-06-10 12:18:05.161838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.739 [2024-06-10 12:18:05.161855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.739 qpair failed and we were unable to recover it. 00:29:15.739 [2024-06-10 12:18:05.161973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.739 [2024-06-10 12:18:05.161989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:15.739 qpair failed and we were unable to recover it. 
[Log condensed: the same connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock error triplet repeats continuously between 12:18:05.162159 and 12:18:05.181769 (output timestamps 00:29:15.739-00:29:15.742), first for tqpair=0x88cfc0, then tqpair=0x7f504c000b90, then tqpair=0x7f503c000b90, all with addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it."]
00:29:15.742 [2024-06-10 12:18:05.181993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.742 [2024-06-10 12:18:05.182009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.742 qpair failed and we were unable to recover it. 00:29:15.742 [2024-06-10 12:18:05.182256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.742 [2024-06-10 12:18:05.182272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.742 qpair failed and we were unable to recover it. 00:29:15.742 [2024-06-10 12:18:05.182362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.742 [2024-06-10 12:18:05.182378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.742 qpair failed and we were unable to recover it. 00:29:15.742 [2024-06-10 12:18:05.182623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.742 [2024-06-10 12:18:05.182640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.742 qpair failed and we were unable to recover it. 00:29:15.742 [2024-06-10 12:18:05.182753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.742 [2024-06-10 12:18:05.182769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.742 qpair failed and we were unable to recover it. 
00:29:15.742 [2024-06-10 12:18:05.182886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.742 [2024-06-10 12:18:05.182908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.742 qpair failed and we were unable to recover it. 00:29:15.742 [2024-06-10 12:18:05.183050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.742 [2024-06-10 12:18:05.183074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.742 qpair failed and we were unable to recover it. 00:29:15.742 [2024-06-10 12:18:05.183196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.742 [2024-06-10 12:18:05.183213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.742 qpair failed and we were unable to recover it. 00:29:15.742 [2024-06-10 12:18:05.183438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.742 [2024-06-10 12:18:05.183455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.742 qpair failed and we were unable to recover it. 00:29:15.742 [2024-06-10 12:18:05.183637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.742 [2024-06-10 12:18:05.183654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.742 qpair failed and we were unable to recover it. 
00:29:15.742 [2024-06-10 12:18:05.183850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.742 [2024-06-10 12:18:05.183867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.742 qpair failed and we were unable to recover it. 00:29:15.742 [2024-06-10 12:18:05.184001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.742 [2024-06-10 12:18:05.184021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.742 qpair failed and we were unable to recover it. 00:29:15.742 [2024-06-10 12:18:05.184117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.742 [2024-06-10 12:18:05.184134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.742 qpair failed and we were unable to recover it. 00:29:15.742 [2024-06-10 12:18:05.184325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.742 [2024-06-10 12:18:05.184342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.742 qpair failed and we were unable to recover it. 00:29:15.742 [2024-06-10 12:18:05.184538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.742 [2024-06-10 12:18:05.184555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.742 qpair failed and we were unable to recover it. 
00:29:15.742 [2024-06-10 12:18:05.184650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.742 [2024-06-10 12:18:05.184667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.742 qpair failed and we were unable to recover it. 00:29:15.742 [2024-06-10 12:18:05.184825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.742 [2024-06-10 12:18:05.184842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.742 qpair failed and we were unable to recover it. 00:29:15.742 [2024-06-10 12:18:05.184942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.742 [2024-06-10 12:18:05.184958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.742 qpair failed and we were unable to recover it. 00:29:15.742 [2024-06-10 12:18:05.185145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.742 [2024-06-10 12:18:05.185165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.742 qpair failed and we were unable to recover it. 00:29:15.742 [2024-06-10 12:18:05.185338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.742 [2024-06-10 12:18:05.185355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.742 qpair failed and we were unable to recover it. 
00:29:15.742 [2024-06-10 12:18:05.185608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.742 [2024-06-10 12:18:05.185625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.742 qpair failed and we were unable to recover it. 00:29:15.742 [2024-06-10 12:18:05.185849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.742 [2024-06-10 12:18:05.185865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.742 qpair failed and we were unable to recover it. 00:29:15.742 [2024-06-10 12:18:05.185982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.742 [2024-06-10 12:18:05.185998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.742 qpair failed and we were unable to recover it. 00:29:15.742 [2024-06-10 12:18:05.186093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.742 [2024-06-10 12:18:05.186109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.742 qpair failed and we were unable to recover it. 00:29:15.742 [2024-06-10 12:18:05.186211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.742 [2024-06-10 12:18:05.186228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.742 qpair failed and we were unable to recover it. 
00:29:15.742 [2024-06-10 12:18:05.186344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.742 [2024-06-10 12:18:05.186360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.742 qpair failed and we were unable to recover it. 00:29:15.742 [2024-06-10 12:18:05.186546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.742 [2024-06-10 12:18:05.186566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.742 qpair failed and we were unable to recover it. 00:29:15.742 [2024-06-10 12:18:05.186673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.742 [2024-06-10 12:18:05.186689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.742 qpair failed and we were unable to recover it. 00:29:15.742 [2024-06-10 12:18:05.186801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.742 [2024-06-10 12:18:05.186817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.742 qpair failed and we were unable to recover it. 00:29:15.742 [2024-06-10 12:18:05.186991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.742 [2024-06-10 12:18:05.187011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.742 qpair failed and we were unable to recover it. 
00:29:15.742 [2024-06-10 12:18:05.187111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.742 [2024-06-10 12:18:05.187127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:15.742 qpair failed and we were unable to recover it. 00:29:16.026 [2024-06-10 12:18:05.187352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.026 [2024-06-10 12:18:05.187369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.026 qpair failed and we were unable to recover it. 00:29:16.026 [2024-06-10 12:18:05.187612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.026 [2024-06-10 12:18:05.187629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.026 qpair failed and we were unable to recover it. 00:29:16.026 [2024-06-10 12:18:05.187741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.187758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 00:29:16.027 [2024-06-10 12:18:05.187852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.187869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 
00:29:16.027 [2024-06-10 12:18:05.188013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.188029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 00:29:16.027 [2024-06-10 12:18:05.188142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.188158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 00:29:16.027 [2024-06-10 12:18:05.188384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.188401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 00:29:16.027 [2024-06-10 12:18:05.188655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.188672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 00:29:16.027 [2024-06-10 12:18:05.188791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.188808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 
00:29:16.027 [2024-06-10 12:18:05.189031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.189048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 00:29:16.027 [2024-06-10 12:18:05.189207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.189223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 00:29:16.027 [2024-06-10 12:18:05.189394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.189411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 00:29:16.027 [2024-06-10 12:18:05.189557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.189574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 00:29:16.027 [2024-06-10 12:18:05.189676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.189693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 
00:29:16.027 [2024-06-10 12:18:05.189785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.189801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 00:29:16.027 [2024-06-10 12:18:05.189902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.189919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 00:29:16.027 [2024-06-10 12:18:05.190015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.190031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 00:29:16.027 [2024-06-10 12:18:05.190186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.190203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 00:29:16.027 [2024-06-10 12:18:05.190306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.190323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 
00:29:16.027 [2024-06-10 12:18:05.190569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.190586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 00:29:16.027 [2024-06-10 12:18:05.190753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.190772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 00:29:16.027 [2024-06-10 12:18:05.190940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.190956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 00:29:16.027 [2024-06-10 12:18:05.191127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.191143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 00:29:16.027 [2024-06-10 12:18:05.191442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.191459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 
00:29:16.027 [2024-06-10 12:18:05.191702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.191719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 00:29:16.027 [2024-06-10 12:18:05.191902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.191919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 00:29:16.027 [2024-06-10 12:18:05.192011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.192026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 00:29:16.027 [2024-06-10 12:18:05.192299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.192315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 00:29:16.027 [2024-06-10 12:18:05.192594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.192611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 
00:29:16.027 [2024-06-10 12:18:05.192855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.192871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 00:29:16.027 [2024-06-10 12:18:05.193019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.193035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 00:29:16.027 [2024-06-10 12:18:05.193211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.193227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 00:29:16.027 [2024-06-10 12:18:05.193396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.193412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 00:29:16.027 [2024-06-10 12:18:05.193588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.193605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 
00:29:16.027 [2024-06-10 12:18:05.193752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.193769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 00:29:16.027 [2024-06-10 12:18:05.193941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.193957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 00:29:16.027 [2024-06-10 12:18:05.194124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.194140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 00:29:16.027 [2024-06-10 12:18:05.194241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.194257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 00:29:16.027 [2024-06-10 12:18:05.194425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.194441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 
00:29:16.027 [2024-06-10 12:18:05.194604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.027 [2024-06-10 12:18:05.194621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.027 qpair failed and we were unable to recover it. 00:29:16.027 [2024-06-10 12:18:05.194788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.028 [2024-06-10 12:18:05.194804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.028 qpair failed and we were unable to recover it. 00:29:16.028 [2024-06-10 12:18:05.194945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.028 [2024-06-10 12:18:05.194961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.028 qpair failed and we were unable to recover it. 00:29:16.028 [2024-06-10 12:18:05.195067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.028 [2024-06-10 12:18:05.195084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.028 qpair failed and we were unable to recover it. 00:29:16.028 [2024-06-10 12:18:05.195191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.028 [2024-06-10 12:18:05.195207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.028 qpair failed and we were unable to recover it. 
00:29:16.028 [2024-06-10 12:18:05.195316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.028 [2024-06-10 12:18:05.195332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.028 qpair failed and we were unable to recover it.
00:29:16.028 [... the same connect() / sock-connection-error / qpair-failed triplet repeats for tqpair=0x7f503c000b90, timestamps 12:18:05.195484 through 12:18:05.205216 ...]
00:29:16.029 [2024-06-10 12:18:05.203499] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization...
00:29:16.029 [2024-06-10 12:18:05.203552] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:16.029 [2024-06-10 12:18:05.205315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.029 [2024-06-10 12:18:05.205334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.029 qpair failed and we were unable to recover it.
00:29:16.031 [... the same triplet repeats for tqpair=0x7f504c000b90, timestamps 12:18:05.205439 through 12:18:05.216505, each ending "qpair failed and we were unable to recover it." ...]
00:29:16.031 [2024-06-10 12:18:05.216650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.031 [2024-06-10 12:18:05.216666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.031 qpair failed and we were unable to recover it. 00:29:16.031 [2024-06-10 12:18:05.216789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.031 [2024-06-10 12:18:05.216806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.031 qpair failed and we were unable to recover it. 00:29:16.031 [2024-06-10 12:18:05.216978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.031 [2024-06-10 12:18:05.216994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.031 qpair failed and we were unable to recover it. 00:29:16.031 [2024-06-10 12:18:05.217164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.031 [2024-06-10 12:18:05.217180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.031 qpair failed and we were unable to recover it. 00:29:16.031 [2024-06-10 12:18:05.217366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.031 [2024-06-10 12:18:05.217382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.031 qpair failed and we were unable to recover it. 
00:29:16.031 [2024-06-10 12:18:05.217592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.031 [2024-06-10 12:18:05.217609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.031 qpair failed and we were unable to recover it. 00:29:16.031 [2024-06-10 12:18:05.217766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.031 [2024-06-10 12:18:05.217781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.031 qpair failed and we were unable to recover it. 00:29:16.031 [2024-06-10 12:18:05.217948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.031 [2024-06-10 12:18:05.217964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.031 qpair failed and we were unable to recover it. 00:29:16.031 [2024-06-10 12:18:05.218093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.031 [2024-06-10 12:18:05.218109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.031 qpair failed and we were unable to recover it. 00:29:16.031 [2024-06-10 12:18:05.218332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.031 [2024-06-10 12:18:05.218349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.031 qpair failed and we were unable to recover it. 
00:29:16.031 [2024-06-10 12:18:05.218519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.031 [2024-06-10 12:18:05.218536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.031 qpair failed and we were unable to recover it. 00:29:16.031 [2024-06-10 12:18:05.218713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.031 [2024-06-10 12:18:05.218729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.031 qpair failed and we were unable to recover it. 00:29:16.031 [2024-06-10 12:18:05.218890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.031 [2024-06-10 12:18:05.218907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.031 qpair failed and we were unable to recover it. 00:29:16.031 [2024-06-10 12:18:05.219176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.031 [2024-06-10 12:18:05.219193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.031 qpair failed and we were unable to recover it. 00:29:16.031 [2024-06-10 12:18:05.219362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.031 [2024-06-10 12:18:05.219379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.031 qpair failed and we were unable to recover it. 
00:29:16.031 [2024-06-10 12:18:05.219627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.031 [2024-06-10 12:18:05.219644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.031 qpair failed and we were unable to recover it. 00:29:16.031 [2024-06-10 12:18:05.219761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.031 [2024-06-10 12:18:05.219777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.031 qpair failed and we were unable to recover it. 00:29:16.031 [2024-06-10 12:18:05.219972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.031 [2024-06-10 12:18:05.219989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.031 qpair failed and we were unable to recover it. 00:29:16.031 [2024-06-10 12:18:05.220832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.031 [2024-06-10 12:18:05.220862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.031 qpair failed and we were unable to recover it. 00:29:16.031 [2024-06-10 12:18:05.221124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.031 [2024-06-10 12:18:05.221142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.031 qpair failed and we were unable to recover it. 
00:29:16.031 [2024-06-10 12:18:05.221416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.031 [2024-06-10 12:18:05.221437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.031 qpair failed and we were unable to recover it. 00:29:16.031 [2024-06-10 12:18:05.221692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.031 [2024-06-10 12:18:05.221709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.031 qpair failed and we were unable to recover it. 00:29:16.031 [2024-06-10 12:18:05.221868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.031 [2024-06-10 12:18:05.221885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.031 qpair failed and we were unable to recover it. 00:29:16.031 [2024-06-10 12:18:05.222052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.031 [2024-06-10 12:18:05.222070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.031 qpair failed and we were unable to recover it. 00:29:16.031 [2024-06-10 12:18:05.222175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.031 [2024-06-10 12:18:05.222191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.031 qpair failed and we were unable to recover it. 
00:29:16.031 [2024-06-10 12:18:05.222319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.031 [2024-06-10 12:18:05.222336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.031 qpair failed and we were unable to recover it. 00:29:16.032 [2024-06-10 12:18:05.222496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.222513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 00:29:16.032 [2024-06-10 12:18:05.222756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.222773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 00:29:16.032 [2024-06-10 12:18:05.222867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.222883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 00:29:16.032 [2024-06-10 12:18:05.223056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.223074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 
00:29:16.032 [2024-06-10 12:18:05.223192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.223210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 00:29:16.032 [2024-06-10 12:18:05.223346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.223364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 00:29:16.032 [2024-06-10 12:18:05.223537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.223554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 00:29:16.032 [2024-06-10 12:18:05.224051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.224077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 00:29:16.032 [2024-06-10 12:18:05.224342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.224360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 
00:29:16.032 [2024-06-10 12:18:05.224518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.224535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 00:29:16.032 [2024-06-10 12:18:05.224823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.224839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 00:29:16.032 [2024-06-10 12:18:05.224997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.225014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 00:29:16.032 [2024-06-10 12:18:05.225131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.225147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 00:29:16.032 [2024-06-10 12:18:05.225320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.225336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 
00:29:16.032 [2024-06-10 12:18:05.225455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.225471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 00:29:16.032 [2024-06-10 12:18:05.225671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.225688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 00:29:16.032 [2024-06-10 12:18:05.225879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.225895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 00:29:16.032 [2024-06-10 12:18:05.226070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.226087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 00:29:16.032 [2024-06-10 12:18:05.226193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.226209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 
00:29:16.032 [2024-06-10 12:18:05.226376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.226392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 00:29:16.032 [2024-06-10 12:18:05.226501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.226519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 00:29:16.032 [2024-06-10 12:18:05.226612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.226628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 00:29:16.032 [2024-06-10 12:18:05.226726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.226742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 00:29:16.032 [2024-06-10 12:18:05.226918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.226935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 
00:29:16.032 [2024-06-10 12:18:05.227048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.227065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 00:29:16.032 [2024-06-10 12:18:05.227228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.227245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 00:29:16.032 [2024-06-10 12:18:05.227372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.227388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 00:29:16.032 [2024-06-10 12:18:05.227610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.227627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 00:29:16.032 [2024-06-10 12:18:05.227733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.227749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 
00:29:16.032 [2024-06-10 12:18:05.227919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.227935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 00:29:16.032 [2024-06-10 12:18:05.228056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.228073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 00:29:16.032 [2024-06-10 12:18:05.228162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.228177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 00:29:16.032 [2024-06-10 12:18:05.228379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.228396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 00:29:16.032 [2024-06-10 12:18:05.228617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.228634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 
00:29:16.032 [2024-06-10 12:18:05.228866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.228886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 00:29:16.032 [2024-06-10 12:18:05.229045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.229062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 00:29:16.032 [2024-06-10 12:18:05.229276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.229292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 00:29:16.032 [2024-06-10 12:18:05.229383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.229399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.032 qpair failed and we were unable to recover it. 00:29:16.032 [2024-06-10 12:18:05.229568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.032 [2024-06-10 12:18:05.229586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 
00:29:16.033 [2024-06-10 12:18:05.229685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.229701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-06-10 12:18:05.229899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.229916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-06-10 12:18:05.230078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.230095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-06-10 12:18:05.230279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.230296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-06-10 12:18:05.230401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.230417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 
00:29:16.033 [2024-06-10 12:18:05.230588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.230606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-06-10 12:18:05.230771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.230787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-06-10 12:18:05.230953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.230970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-06-10 12:18:05.231160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.231177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-06-10 12:18:05.231337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.231354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 
00:29:16.033 [2024-06-10 12:18:05.231518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.231535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-06-10 12:18:05.231628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.231645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-06-10 12:18:05.231939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.231956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-06-10 12:18:05.232122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.232138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-06-10 12:18:05.232367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.232383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 
00:29:16.033 [2024-06-10 12:18:05.232472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.232492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-06-10 12:18:05.232604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.232621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-06-10 12:18:05.232783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.232801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-06-10 12:18:05.232905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.232923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-06-10 12:18:05.233152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.233168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 
00:29:16.033 [2024-06-10 12:18:05.233300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.233316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-06-10 12:18:05.233423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.233440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-06-10 12:18:05.233711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.233729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-06-10 12:18:05.233839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.233855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-06-10 12:18:05.234109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.234127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 
00:29:16.033 [2024-06-10 12:18:05.234277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.234293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-06-10 12:18:05.234421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.234438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-06-10 12:18:05.234551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.234568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-06-10 12:18:05.234744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.234761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-06-10 12:18:05.234852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.234869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 
00:29:16.033 [2024-06-10 12:18:05.235028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.235045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-06-10 12:18:05.235210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.235227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-06-10 12:18:05.235394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.235410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-06-10 12:18:05.235568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.235586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-06-10 12:18:05.235677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.235692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 
00:29:16.033 [2024-06-10 12:18:05.235871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.235891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-06-10 12:18:05.236011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.236028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.033 [2024-06-10 12:18:05.236144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.033 [2024-06-10 12:18:05.236160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.033 qpair failed and we were unable to recover it. 00:29:16.034 [2024-06-10 12:18:05.236319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.236336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 00:29:16.034 [2024-06-10 12:18:05.236443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.236459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 
00:29:16.034 [2024-06-10 12:18:05.236557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.236574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 00:29:16.034 [2024-06-10 12:18:05.236734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.236750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 00:29:16.034 [2024-06-10 12:18:05.236842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.236860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 00:29:16.034 [2024-06-10 12:18:05.237003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.237019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 00:29:16.034 [2024-06-10 12:18:05.237123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.237139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 
00:29:16.034 [2024-06-10 12:18:05.237329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.237345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 00:29:16.034 [2024-06-10 12:18:05.237515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.237532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 00:29:16.034 [2024-06-10 12:18:05.237624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.237640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 00:29:16.034 [2024-06-10 12:18:05.237814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.237831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 00:29:16.034 [2024-06-10 12:18:05.238008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.238024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 
00:29:16.034 [2024-06-10 12:18:05.238194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.238211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 00:29:16.034 [2024-06-10 12:18:05.238324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.238340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 00:29:16.034 [2024-06-10 12:18:05.238511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.238527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 00:29:16.034 [2024-06-10 12:18:05.238634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.238651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 00:29:16.034 [2024-06-10 12:18:05.238741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.238757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 
00:29:16.034 [2024-06-10 12:18:05.238928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.238944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 00:29:16.034 [2024-06-10 12:18:05.239104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.239122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 00:29:16.034 [2024-06-10 12:18:05.239210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.239226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 00:29:16.034 [2024-06-10 12:18:05.239393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.239409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 00:29:16.034 [2024-06-10 12:18:05.239511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.239528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 
00:29:16.034 [2024-06-10 12:18:05.239694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.239711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 00:29:16.034 [2024-06-10 12:18:05.239868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.239883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 00:29:16.034 [2024-06-10 12:18:05.239994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.240010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 00:29:16.034 [2024-06-10 12:18:05.240119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.240135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 00:29:16.034 [2024-06-10 12:18:05.240334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.240349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 
00:29:16.034 [2024-06-10 12:18:05.240452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.240469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 00:29:16.034 [2024-06-10 12:18:05.240578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.240594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 00:29:16.034 [2024-06-10 12:18:05.240792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.240809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 00:29:16.034 [2024-06-10 12:18:05.240914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.240930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 00:29:16.034 [2024-06-10 12:18:05.241098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.241114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 
00:29:16.034 [2024-06-10 12:18:05.241234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.241251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 00:29:16.034 EAL: No free 2048 kB hugepages reported on node 1 00:29:16.034 [2024-06-10 12:18:05.241433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.241451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 00:29:16.034 [2024-06-10 12:18:05.241636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.241652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 00:29:16.034 [2024-06-10 12:18:05.241773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.241789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 00:29:16.034 [2024-06-10 12:18:05.241901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.241918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 
00:29:16.034 [2024-06-10 12:18:05.242049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.034 [2024-06-10 12:18:05.242074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.034 qpair failed and we were unable to recover it. 00:29:16.034 [2024-06-10 12:18:05.242304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.035 [2024-06-10 12:18:05.242321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.035 qpair failed and we were unable to recover it. 00:29:16.035 [2024-06-10 12:18:05.242575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.035 [2024-06-10 12:18:05.242594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.035 qpair failed and we were unable to recover it. 00:29:16.035 [2024-06-10 12:18:05.242704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.035 [2024-06-10 12:18:05.242721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.035 qpair failed and we were unable to recover it. 00:29:16.035 [2024-06-10 12:18:05.242827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.035 [2024-06-10 12:18:05.242843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.035 qpair failed and we were unable to recover it. 
00:29:16.035 [2024-06-10 12:18:05.243110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.035 [2024-06-10 12:18:05.243127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.035 qpair failed and we were unable to recover it. 00:29:16.035 [2024-06-10 12:18:05.243218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.035 [2024-06-10 12:18:05.243234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.035 qpair failed and we were unable to recover it. 00:29:16.035 [2024-06-10 12:18:05.243412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.035 [2024-06-10 12:18:05.243428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.035 qpair failed and we were unable to recover it. 00:29:16.035 [2024-06-10 12:18:05.243725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.035 [2024-06-10 12:18:05.243743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.035 qpair failed and we were unable to recover it. 00:29:16.035 [2024-06-10 12:18:05.243855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.035 [2024-06-10 12:18:05.243871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.035 qpair failed and we were unable to recover it. 
00:29:16.035 [2024-06-10 12:18:05.243976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.035 [2024-06-10 12:18:05.243992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.035 qpair failed and we were unable to recover it. 00:29:16.035 [2024-06-10 12:18:05.244106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.035 [2024-06-10 12:18:05.244123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.035 qpair failed and we were unable to recover it. 00:29:16.035 [2024-06-10 12:18:05.244307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.035 [2024-06-10 12:18:05.244323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.035 qpair failed and we were unable to recover it. 00:29:16.035 [2024-06-10 12:18:05.244430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.035 [2024-06-10 12:18:05.244446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.035 qpair failed and we were unable to recover it. 00:29:16.035 [2024-06-10 12:18:05.244616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.035 [2024-06-10 12:18:05.244634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.035 qpair failed and we were unable to recover it. 
00:29:16.035 [2024-06-10 12:18:05.244743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.035 [2024-06-10 12:18:05.244760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.035 qpair failed and we were unable to recover it. 00:29:16.035 [2024-06-10 12:18:05.244955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.035 [2024-06-10 12:18:05.244972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.035 qpair failed and we were unable to recover it. 00:29:16.035 [2024-06-10 12:18:05.245086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.035 [2024-06-10 12:18:05.245102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.035 qpair failed and we were unable to recover it. 00:29:16.035 [2024-06-10 12:18:05.245214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.035 [2024-06-10 12:18:05.245230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.035 qpair failed and we were unable to recover it. 00:29:16.035 [2024-06-10 12:18:05.245480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.035 [2024-06-10 12:18:05.245497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.035 qpair failed and we were unable to recover it. 
00:29:16.035 [2024-06-10 12:18:05.245672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.035 [2024-06-10 12:18:05.245689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.035 qpair failed and we were unable to recover it. 00:29:16.035 [2024-06-10 12:18:05.245882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.035 [2024-06-10 12:18:05.245898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.035 qpair failed and we were unable to recover it. 00:29:16.035 [2024-06-10 12:18:05.246074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.035 [2024-06-10 12:18:05.246090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.035 qpair failed and we were unable to recover it. 00:29:16.035 [2024-06-10 12:18:05.246254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.035 [2024-06-10 12:18:05.246271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.035 qpair failed and we were unable to recover it. 00:29:16.035 [2024-06-10 12:18:05.246449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.035 [2024-06-10 12:18:05.246465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.035 qpair failed and we were unable to recover it. 
00:29:16.035 [2024-06-10 12:18:05.246642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.035 [2024-06-10 12:18:05.246659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.035 qpair failed and we were unable to recover it. 00:29:16.035 [2024-06-10 12:18:05.246821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.035 [2024-06-10 12:18:05.246838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.035 qpair failed and we were unable to recover it. 00:29:16.035 [2024-06-10 12:18:05.246939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.035 [2024-06-10 12:18:05.246958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.035 qpair failed and we were unable to recover it. 00:29:16.035 [2024-06-10 12:18:05.247135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.035 [2024-06-10 12:18:05.247152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.035 qpair failed and we were unable to recover it. 00:29:16.035 [2024-06-10 12:18:05.247375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.035 [2024-06-10 12:18:05.247391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.035 qpair failed and we were unable to recover it. 
00:29:16.035 [2024-06-10 12:18:05.247589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.035 [2024-06-10 12:18:05.247605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.035 qpair failed and we were unable to recover it. 00:29:16.035 [2024-06-10 12:18:05.247698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.035 [2024-06-10 12:18:05.247714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.035 qpair failed and we were unable to recover it. 00:29:16.035 [2024-06-10 12:18:05.247825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.035 [2024-06-10 12:18:05.247841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.035 qpair failed and we were unable to recover it. 00:29:16.035 [2024-06-10 12:18:05.247920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.035 [2024-06-10 12:18:05.247937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.035 qpair failed and we were unable to recover it. 00:29:16.035 [2024-06-10 12:18:05.248033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.035 [2024-06-10 12:18:05.248050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.035 qpair failed and we were unable to recover it. 
00:29:16.038 [2024-06-10 12:18:05.264419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.038 [2024-06-10 12:18:05.264435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.038 qpair failed and we were unable to recover it. 00:29:16.038 [2024-06-10 12:18:05.264526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.038 [2024-06-10 12:18:05.264543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.038 qpair failed and we were unable to recover it. 00:29:16.038 [2024-06-10 12:18:05.264700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.038 [2024-06-10 12:18:05.264716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.038 qpair failed and we were unable to recover it. 00:29:16.038 [2024-06-10 12:18:05.264811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.038 [2024-06-10 12:18:05.264827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.038 qpair failed and we were unable to recover it. 00:29:16.038 [2024-06-10 12:18:05.264947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.038 [2024-06-10 12:18:05.264963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.038 qpair failed and we were unable to recover it. 
00:29:16.038 [2024-06-10 12:18:05.265057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.038 [2024-06-10 12:18:05.265073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.038 qpair failed and we were unable to recover it. 00:29:16.038 [2024-06-10 12:18:05.265174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.038 [2024-06-10 12:18:05.265190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.038 qpair failed and we were unable to recover it. 00:29:16.039 [2024-06-10 12:18:05.265453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.265470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 00:29:16.039 [2024-06-10 12:18:05.265631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.265648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 00:29:16.039 [2024-06-10 12:18:05.265749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.265765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 
00:29:16.039 [2024-06-10 12:18:05.265857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.265874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 00:29:16.039 [2024-06-10 12:18:05.265986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.266002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 00:29:16.039 [2024-06-10 12:18:05.266100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.266116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 00:29:16.039 [2024-06-10 12:18:05.266238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.266255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 00:29:16.039 [2024-06-10 12:18:05.266343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.266359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 
00:29:16.039 [2024-06-10 12:18:05.266472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.266494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 00:29:16.039 [2024-06-10 12:18:05.266648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.266665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 00:29:16.039 [2024-06-10 12:18:05.266754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.266770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 00:29:16.039 [2024-06-10 12:18:05.266875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.266893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 00:29:16.039 [2024-06-10 12:18:05.267085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.267101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 
00:29:16.039 [2024-06-10 12:18:05.267214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.267230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 00:29:16.039 [2024-06-10 12:18:05.267348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.267364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 00:29:16.039 [2024-06-10 12:18:05.267522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.267540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 00:29:16.039 [2024-06-10 12:18:05.267652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.267668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 00:29:16.039 [2024-06-10 12:18:05.267852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.267868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 
00:29:16.039 [2024-06-10 12:18:05.267966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.267983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 00:29:16.039 [2024-06-10 12:18:05.268114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.268130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 00:29:16.039 [2024-06-10 12:18:05.268235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.268251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 00:29:16.039 [2024-06-10 12:18:05.268410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.268427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 00:29:16.039 [2024-06-10 12:18:05.268521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.268538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 
00:29:16.039 [2024-06-10 12:18:05.268643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.268659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 00:29:16.039 [2024-06-10 12:18:05.268753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.268769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 00:29:16.039 [2024-06-10 12:18:05.268938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.268954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 00:29:16.039 [2024-06-10 12:18:05.269202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.269218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 00:29:16.039 [2024-06-10 12:18:05.269307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.269324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 
00:29:16.039 [2024-06-10 12:18:05.269412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.269428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 00:29:16.039 [2024-06-10 12:18:05.269529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.269546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 00:29:16.039 [2024-06-10 12:18:05.269657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.269673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 00:29:16.039 [2024-06-10 12:18:05.269763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.269779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 00:29:16.039 [2024-06-10 12:18:05.269884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.269901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 
00:29:16.039 [2024-06-10 12:18:05.270070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.270086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 00:29:16.039 [2024-06-10 12:18:05.270192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.270208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 00:29:16.039 [2024-06-10 12:18:05.270289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.270305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 00:29:16.039 [2024-06-10 12:18:05.270399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.270416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 00:29:16.039 [2024-06-10 12:18:05.270506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.270523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 
00:29:16.039 [2024-06-10 12:18:05.270700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.039 [2024-06-10 12:18:05.270721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.039 qpair failed and we were unable to recover it. 00:29:16.039 [2024-06-10 12:18:05.270902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.040 [2024-06-10 12:18:05.270919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.040 qpair failed and we were unable to recover it. 00:29:16.040 [2024-06-10 12:18:05.271022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.040 [2024-06-10 12:18:05.271038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.040 qpair failed and we were unable to recover it. 00:29:16.040 [2024-06-10 12:18:05.271198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.040 [2024-06-10 12:18:05.271214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.040 qpair failed and we were unable to recover it. 00:29:16.040 [2024-06-10 12:18:05.271401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.040 [2024-06-10 12:18:05.271418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.040 qpair failed and we were unable to recover it. 
00:29:16.040 [2024-06-10 12:18:05.271577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.040 [2024-06-10 12:18:05.271594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.040 qpair failed and we were unable to recover it. 00:29:16.040 [2024-06-10 12:18:05.271696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.040 [2024-06-10 12:18:05.271712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.040 qpair failed and we were unable to recover it. 00:29:16.040 [2024-06-10 12:18:05.271869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.040 [2024-06-10 12:18:05.271885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.040 qpair failed and we were unable to recover it. 00:29:16.040 [2024-06-10 12:18:05.272062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.040 [2024-06-10 12:18:05.272078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.040 qpair failed and we were unable to recover it. 00:29:16.040 [2024-06-10 12:18:05.272186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.040 [2024-06-10 12:18:05.272201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.040 qpair failed and we were unable to recover it. 
00:29:16.040 [2024-06-10 12:18:05.272320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.040 [2024-06-10 12:18:05.272337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.040 qpair failed and we were unable to recover it. 00:29:16.040 [2024-06-10 12:18:05.272447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.040 [2024-06-10 12:18:05.272463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.040 qpair failed and we were unable to recover it. 00:29:16.040 [2024-06-10 12:18:05.272580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.040 [2024-06-10 12:18:05.272596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.040 qpair failed and we were unable to recover it. 00:29:16.040 [2024-06-10 12:18:05.272776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.040 [2024-06-10 12:18:05.272793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.040 qpair failed and we were unable to recover it. 00:29:16.040 [2024-06-10 12:18:05.272887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.040 [2024-06-10 12:18:05.272906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.040 qpair failed and we were unable to recover it. 
00:29:16.040 [2024-06-10 12:18:05.273130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.040 [2024-06-10 12:18:05.273146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.040 qpair failed and we were unable to recover it. 00:29:16.040 [2024-06-10 12:18:05.273394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.040 [2024-06-10 12:18:05.273410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.040 qpair failed and we were unable to recover it. 00:29:16.040 [2024-06-10 12:18:05.273573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.040 [2024-06-10 12:18:05.273590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.040 qpair failed and we were unable to recover it. 00:29:16.040 [2024-06-10 12:18:05.273769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.040 [2024-06-10 12:18:05.273785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.040 qpair failed and we were unable to recover it. 00:29:16.040 [2024-06-10 12:18:05.274026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.040 [2024-06-10 12:18:05.274043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.040 qpair failed and we were unable to recover it. 
00:29:16.040 [2024-06-10 12:18:05.274153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.040 [2024-06-10 12:18:05.274169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.040 qpair failed and we were unable to recover it. 00:29:16.040 [2024-06-10 12:18:05.274276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.040 [2024-06-10 12:18:05.274293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.040 qpair failed and we were unable to recover it. 00:29:16.040 [2024-06-10 12:18:05.274461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.040 [2024-06-10 12:18:05.274481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.040 qpair failed and we were unable to recover it. 00:29:16.040 [2024-06-10 12:18:05.274658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.040 [2024-06-10 12:18:05.274674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.040 qpair failed and we were unable to recover it. 00:29:16.040 [2024-06-10 12:18:05.274763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.040 [2024-06-10 12:18:05.274780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.040 qpair failed and we were unable to recover it. 
00:29:16.040 [2024-06-10 12:18:05.274873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.040 [2024-06-10 12:18:05.274890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.040 qpair failed and we were unable to recover it. 00:29:16.040 [2024-06-10 12:18:05.274970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.040 [2024-06-10 12:18:05.274986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.040 qpair failed and we were unable to recover it. 00:29:16.040 [2024-06-10 12:18:05.275099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.040 [2024-06-10 12:18:05.275115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.040 qpair failed and we were unable to recover it. 00:29:16.040 [2024-06-10 12:18:05.275288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.040 [2024-06-10 12:18:05.275305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.040 qpair failed and we were unable to recover it. 00:29:16.040 [2024-06-10 12:18:05.275459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.040 [2024-06-10 12:18:05.275480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.040 qpair failed and we were unable to recover it. 
00:29:16.040 [2024-06-10 12:18:05.275580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.040 [2024-06-10 12:18:05.275596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.040 qpair failed and we were unable to recover it.
00:29:16.043 [2024-06-10 12:18:05.290933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.043 [2024-06-10 12:18:05.290949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.043 qpair failed and we were unable to recover it. 00:29:16.043 [2024-06-10 12:18:05.291056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.043 [2024-06-10 12:18:05.291073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.043 qpair failed and we were unable to recover it. 00:29:16.043 [2024-06-10 12:18:05.291184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.043 [2024-06-10 12:18:05.291200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.043 qpair failed and we were unable to recover it. 00:29:16.043 [2024-06-10 12:18:05.291370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.043 [2024-06-10 12:18:05.291398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.043 qpair failed and we were unable to recover it. 00:29:16.043 [2024-06-10 12:18:05.291503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.043 [2024-06-10 12:18:05.291521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.043 qpair failed and we were unable to recover it. 
00:29:16.043 [2024-06-10 12:18:05.291632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.043 [2024-06-10 12:18:05.291648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.043 qpair failed and we were unable to recover it. 00:29:16.043 [2024-06-10 12:18:05.291813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.043 [2024-06-10 12:18:05.291829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.043 qpair failed and we were unable to recover it. 00:29:16.043 [2024-06-10 12:18:05.291947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.043 [2024-06-10 12:18:05.291964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.043 qpair failed and we were unable to recover it. 00:29:16.043 [2024-06-10 12:18:05.292059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.043 [2024-06-10 12:18:05.292076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.043 qpair failed and we were unable to recover it. 00:29:16.043 [2024-06-10 12:18:05.292197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.043 [2024-06-10 12:18:05.292213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.043 qpair failed and we were unable to recover it. 
00:29:16.043 [2024-06-10 12:18:05.292394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.043 [2024-06-10 12:18:05.292411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.043 qpair failed and we were unable to recover it. 00:29:16.043 [2024-06-10 12:18:05.292505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.043 [2024-06-10 12:18:05.292522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.043 qpair failed and we were unable to recover it. 00:29:16.043 [2024-06-10 12:18:05.292613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.043 [2024-06-10 12:18:05.292630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.043 qpair failed and we were unable to recover it. 00:29:16.043 [2024-06-10 12:18:05.292737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.043 [2024-06-10 12:18:05.292754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.043 qpair failed and we were unable to recover it. 00:29:16.043 [2024-06-10 12:18:05.292859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.043 [2024-06-10 12:18:05.292875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.043 qpair failed and we were unable to recover it. 
00:29:16.043 [2024-06-10 12:18:05.293048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.043 [2024-06-10 12:18:05.293064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.043 qpair failed and we were unable to recover it.
00:29:16.043 [2024-06-10 12:18:05.293176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.043 [2024-06-10 12:18:05.293196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.043 qpair failed and we were unable to recover it.
00:29:16.043 [2024-06-10 12:18:05.293288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.043 [2024-06-10 12:18:05.293304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.043 qpair failed and we were unable to recover it.
00:29:16.043 [2024-06-10 12:18:05.293408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.043 [2024-06-10 12:18:05.293424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.043 qpair failed and we were unable to recover it.
00:29:16.043 [2024-06-10 12:18:05.293585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.043 [2024-06-10 12:18:05.293602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.043 qpair failed and we were unable to recover it.
00:29:16.043 [2024-06-10 12:18:05.293782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.043 [2024-06-10 12:18:05.293798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.043 qpair failed and we were unable to recover it.
00:29:16.043 [2024-06-10 12:18:05.293903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.043 [2024-06-10 12:18:05.293919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.043 qpair failed and we were unable to recover it.
00:29:16.043 [2024-06-10 12:18:05.294081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.043 [2024-06-10 12:18:05.294098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.043 qpair failed and we were unable to recover it.
00:29:16.043 [2024-06-10 12:18:05.294191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.043 [2024-06-10 12:18:05.294207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.043 qpair failed and we were unable to recover it.
00:29:16.043 [2024-06-10 12:18:05.294306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.043 [2024-06-10 12:18:05.294322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.043 qpair failed and we were unable to recover it.
00:29:16.043 [2024-06-10 12:18:05.294484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.043 [2024-06-10 12:18:05.294501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.043 qpair failed and we were unable to recover it.
00:29:16.043 [2024-06-10 12:18:05.294607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.043 [2024-06-10 12:18:05.294624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.043 qpair failed and we were unable to recover it.
00:29:16.043 [2024-06-10 12:18:05.294786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.043 [2024-06-10 12:18:05.294803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.043 qpair failed and we were unable to recover it.
00:29:16.043 [2024-06-10 12:18:05.295048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.043 [2024-06-10 12:18:05.295066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.043 qpair failed and we were unable to recover it.
00:29:16.043 [2024-06-10 12:18:05.295227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.043 [2024-06-10 12:18:05.295243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.043 qpair failed and we were unable to recover it.
00:29:16.043 [2024-06-10 12:18:05.295344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.043 [2024-06-10 12:18:05.295361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.295459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.295481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.295585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.295602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.295775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.295791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.295904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.295921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.296015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.296031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.296191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.296208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.296318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.296335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.296367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:16.044 [2024-06-10 12:18:05.296496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.296526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.296641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.296657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.296829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.296845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.296946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.296962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.297120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.297136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.297374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.297391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.297505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.297522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.297703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.297720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.297819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.297835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.297997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.298013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.298127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.298143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.298246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.298263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.298374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.298390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.298549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.298566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.298664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.298681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.298791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.298807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.298965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.298981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.299071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.299088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.299197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.299221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.299393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.299412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.299608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.299625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.299716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.299732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.299823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.299839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.299945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.299961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.300056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.300072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.300272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.300289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.300417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.300434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.300543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.300560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.300681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.300698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.300962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.300980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.044 [2024-06-10 12:18:05.301145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.044 [2024-06-10 12:18:05.301162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.044 qpair failed and we were unable to recover it.
00:29:16.045 [2024-06-10 12:18:05.301324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.045 [2024-06-10 12:18:05.301340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.045 qpair failed and we were unable to recover it.
00:29:16.045 [2024-06-10 12:18:05.301443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.045 [2024-06-10 12:18:05.301460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.045 qpair failed and we were unable to recover it.
00:29:16.045 [2024-06-10 12:18:05.301589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.045 [2024-06-10 12:18:05.301607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.045 qpair failed and we were unable to recover it.
00:29:16.045 [2024-06-10 12:18:05.301765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.045 [2024-06-10 12:18:05.301786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.045 qpair failed and we were unable to recover it.
00:29:16.045 [2024-06-10 12:18:05.301980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.045 [2024-06-10 12:18:05.301998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.045 qpair failed and we were unable to recover it.
00:29:16.045 [2024-06-10 12:18:05.302150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.045 [2024-06-10 12:18:05.302168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.045 qpair failed and we were unable to recover it.
00:29:16.045 [2024-06-10 12:18:05.302264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.045 [2024-06-10 12:18:05.302280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.045 qpair failed and we were unable to recover it.
00:29:16.045 [2024-06-10 12:18:05.302374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.045 [2024-06-10 12:18:05.302391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.045 qpair failed and we were unable to recover it.
00:29:16.045 [2024-06-10 12:18:05.302562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.045 [2024-06-10 12:18:05.302580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.045 qpair failed and we were unable to recover it.
00:29:16.045 [2024-06-10 12:18:05.302757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.045 [2024-06-10 12:18:05.302774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.045 qpair failed and we were unable to recover it.
00:29:16.045 [2024-06-10 12:18:05.302879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.045 [2024-06-10 12:18:05.302895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.045 qpair failed and we were unable to recover it.
00:29:16.045 [2024-06-10 12:18:05.303005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.045 [2024-06-10 12:18:05.303022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.045 qpair failed and we were unable to recover it.
00:29:16.045 [2024-06-10 12:18:05.303190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.045 [2024-06-10 12:18:05.303206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.045 qpair failed and we were unable to recover it.
00:29:16.045 [2024-06-10 12:18:05.303364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.045 [2024-06-10 12:18:05.303381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.045 qpair failed and we were unable to recover it.
00:29:16.045 [2024-06-10 12:18:05.303598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.045 [2024-06-10 12:18:05.303617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.045 qpair failed and we were unable to recover it.
00:29:16.045 [2024-06-10 12:18:05.303717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.045 [2024-06-10 12:18:05.303735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.045 qpair failed and we were unable to recover it.
00:29:16.045 [2024-06-10 12:18:05.303845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.045 [2024-06-10 12:18:05.303863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.045 qpair failed and we were unable to recover it.
00:29:16.045 [2024-06-10 12:18:05.303958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.045 [2024-06-10 12:18:05.303975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.045 qpair failed and we were unable to recover it.
00:29:16.045 [2024-06-10 12:18:05.304065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.045 [2024-06-10 12:18:05.304082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.045 qpair failed and we were unable to recover it.
00:29:16.045 [2024-06-10 12:18:05.304180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.045 [2024-06-10 12:18:05.304197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.045 qpair failed and we were unable to recover it.
00:29:16.045 [2024-06-10 12:18:05.304378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.045 [2024-06-10 12:18:05.304394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.045 qpair failed and we were unable to recover it.
00:29:16.045 [2024-06-10 12:18:05.304524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.045 [2024-06-10 12:18:05.304541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.045 qpair failed and we were unable to recover it.
00:29:16.045 [2024-06-10 12:18:05.304652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.045 [2024-06-10 12:18:05.304670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.045 qpair failed and we were unable to recover it.
00:29:16.045 [2024-06-10 12:18:05.304830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.045 [2024-06-10 12:18:05.304848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.045 qpair failed and we were unable to recover it.
00:29:16.045 [2024-06-10 12:18:05.305073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.045 [2024-06-10 12:18:05.305090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.045 qpair failed and we were unable to recover it.
00:29:16.045 [2024-06-10 12:18:05.305200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.045 [2024-06-10 12:18:05.305217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.045 qpair failed and we were unable to recover it.
00:29:16.045 [2024-06-10 12:18:05.305377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.045 [2024-06-10 12:18:05.305394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.045 qpair failed and we were unable to recover it.
00:29:16.045 [2024-06-10 12:18:05.305501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.045 [2024-06-10 12:18:05.305520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.045 qpair failed and we were unable to recover it.
00:29:16.045 [2024-06-10 12:18:05.305633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.045 [2024-06-10 12:18:05.305650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.045 qpair failed and we were unable to recover it.
00:29:16.045 [2024-06-10 12:18:05.305765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.045 [2024-06-10 12:18:05.305781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.045 qpair failed and we were unable to recover it.
00:29:16.045 [2024-06-10 12:18:05.305898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.045 [2024-06-10 12:18:05.305916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.045 qpair failed and we were unable to recover it.
00:29:16.045 [2024-06-10 12:18:05.306084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.045 [2024-06-10 12:18:05.306102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.045 qpair failed and we were unable to recover it.
00:29:16.045 [2024-06-10 12:18:05.306209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.046 [2024-06-10 12:18:05.306225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.046 qpair failed and we were unable to recover it.
00:29:16.046 [2024-06-10 12:18:05.306325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.046 [2024-06-10 12:18:05.306342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.046 qpair failed and we were unable to recover it.
00:29:16.046 [2024-06-10 12:18:05.306432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.046 [2024-06-10 12:18:05.306449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.046 qpair failed and we were unable to recover it.
00:29:16.046 [2024-06-10 12:18:05.306548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.046 [2024-06-10 12:18:05.306564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.046 qpair failed and we were unable to recover it.
00:29:16.046 [2024-06-10 12:18:05.306721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.046 [2024-06-10 12:18:05.306737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.046 qpair failed and we were unable to recover it.
00:29:16.046 [2024-06-10 12:18:05.306843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.046 [2024-06-10 12:18:05.306859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.046 qpair failed and we were unable to recover it.
00:29:16.046 [2024-06-10 12:18:05.306948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.046 [2024-06-10 12:18:05.306964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.046 qpair failed and we were unable to recover it.
00:29:16.046 [2024-06-10 12:18:05.307081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.046 [2024-06-10 12:18:05.307098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.046 qpair failed and we were unable to recover it.
00:29:16.046 [2024-06-10 12:18:05.307193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.046 [2024-06-10 12:18:05.307210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.046 qpair failed and we were unable to recover it.
00:29:16.046 [2024-06-10 12:18:05.307436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.046 [2024-06-10 12:18:05.307452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.046 qpair failed and we were unable to recover it. 00:29:16.046 [2024-06-10 12:18:05.307567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.046 [2024-06-10 12:18:05.307585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.046 qpair failed and we were unable to recover it. 00:29:16.046 [2024-06-10 12:18:05.307747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.046 [2024-06-10 12:18:05.307764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.046 qpair failed and we were unable to recover it. 00:29:16.046 [2024-06-10 12:18:05.307855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.046 [2024-06-10 12:18:05.307872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.046 qpair failed and we were unable to recover it. 00:29:16.046 [2024-06-10 12:18:05.308039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.046 [2024-06-10 12:18:05.308056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.046 qpair failed and we were unable to recover it. 
00:29:16.046 [2024-06-10 12:18:05.308230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.046 [2024-06-10 12:18:05.308246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.046 qpair failed and we were unable to recover it. 00:29:16.046 [2024-06-10 12:18:05.308322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.046 [2024-06-10 12:18:05.308338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.046 qpair failed and we were unable to recover it. 00:29:16.046 [2024-06-10 12:18:05.308446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.046 [2024-06-10 12:18:05.308463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.046 qpair failed and we were unable to recover it. 00:29:16.046 [2024-06-10 12:18:05.308568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.046 [2024-06-10 12:18:05.308585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.046 qpair failed and we were unable to recover it. 00:29:16.046 [2024-06-10 12:18:05.308809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.046 [2024-06-10 12:18:05.308825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.046 qpair failed and we were unable to recover it. 
00:29:16.046 [2024-06-10 12:18:05.308982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.046 [2024-06-10 12:18:05.308998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.046 qpair failed and we were unable to recover it. 00:29:16.046 [2024-06-10 12:18:05.309098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.046 [2024-06-10 12:18:05.309115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.046 qpair failed and we were unable to recover it. 00:29:16.046 [2024-06-10 12:18:05.309301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.046 [2024-06-10 12:18:05.309318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.046 qpair failed and we were unable to recover it. 00:29:16.046 [2024-06-10 12:18:05.309484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.046 [2024-06-10 12:18:05.309505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.046 qpair failed and we were unable to recover it. 00:29:16.046 [2024-06-10 12:18:05.309633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.046 [2024-06-10 12:18:05.309651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.046 qpair failed and we were unable to recover it. 
00:29:16.046 [2024-06-10 12:18:05.309825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.046 [2024-06-10 12:18:05.309842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.046 qpair failed and we were unable to recover it. 00:29:16.046 [2024-06-10 12:18:05.309955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.046 [2024-06-10 12:18:05.309971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.046 qpair failed and we were unable to recover it. 00:29:16.046 [2024-06-10 12:18:05.310154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.046 [2024-06-10 12:18:05.310170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.046 qpair failed and we were unable to recover it. 00:29:16.046 [2024-06-10 12:18:05.310395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.046 [2024-06-10 12:18:05.310411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.046 qpair failed and we were unable to recover it. 00:29:16.046 [2024-06-10 12:18:05.310503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.046 [2024-06-10 12:18:05.310519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.046 qpair failed and we were unable to recover it. 
00:29:16.046 [2024-06-10 12:18:05.310628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.046 [2024-06-10 12:18:05.310645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.046 qpair failed and we were unable to recover it. 00:29:16.046 [2024-06-10 12:18:05.310753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.046 [2024-06-10 12:18:05.310770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.046 qpair failed and we were unable to recover it. 00:29:16.046 [2024-06-10 12:18:05.311007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.046 [2024-06-10 12:18:05.311023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.046 qpair failed and we were unable to recover it. 00:29:16.046 [2024-06-10 12:18:05.311115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.046 [2024-06-10 12:18:05.311131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.046 qpair failed and we were unable to recover it. 00:29:16.046 [2024-06-10 12:18:05.311240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.046 [2024-06-10 12:18:05.311257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.046 qpair failed and we were unable to recover it. 
00:29:16.046 [2024-06-10 12:18:05.311352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.046 [2024-06-10 12:18:05.311368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.046 qpair failed and we were unable to recover it. 00:29:16.046 [2024-06-10 12:18:05.311508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.046 [2024-06-10 12:18:05.311525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.046 qpair failed and we were unable to recover it. 00:29:16.046 [2024-06-10 12:18:05.311687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.046 [2024-06-10 12:18:05.311704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.046 qpair failed and we were unable to recover it. 00:29:16.046 [2024-06-10 12:18:05.311865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.046 [2024-06-10 12:18:05.311881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.046 qpair failed and we were unable to recover it. 00:29:16.046 [2024-06-10 12:18:05.311971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.046 [2024-06-10 12:18:05.311988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.046 qpair failed and we were unable to recover it. 
00:29:16.047 [2024-06-10 12:18:05.312097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.312115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 00:29:16.047 [2024-06-10 12:18:05.312222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.312238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 00:29:16.047 [2024-06-10 12:18:05.312410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.312426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 00:29:16.047 [2024-06-10 12:18:05.312529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.312546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 00:29:16.047 [2024-06-10 12:18:05.312636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.312652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 
00:29:16.047 [2024-06-10 12:18:05.312733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.312749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 00:29:16.047 [2024-06-10 12:18:05.312865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.312881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 00:29:16.047 [2024-06-10 12:18:05.313037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.313053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 00:29:16.047 [2024-06-10 12:18:05.313146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.313163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 00:29:16.047 [2024-06-10 12:18:05.313252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.313268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 
00:29:16.047 [2024-06-10 12:18:05.313359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.313378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 00:29:16.047 [2024-06-10 12:18:05.313497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.313515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 00:29:16.047 [2024-06-10 12:18:05.313618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.313634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 00:29:16.047 [2024-06-10 12:18:05.313728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.313745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 00:29:16.047 [2024-06-10 12:18:05.313854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.313870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 
00:29:16.047 [2024-06-10 12:18:05.313966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.313983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 00:29:16.047 [2024-06-10 12:18:05.314144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.314161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 00:29:16.047 [2024-06-10 12:18:05.314328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.314345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 00:29:16.047 [2024-06-10 12:18:05.314589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.314607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 00:29:16.047 [2024-06-10 12:18:05.314682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.314699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 
00:29:16.047 [2024-06-10 12:18:05.314799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.314816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 00:29:16.047 [2024-06-10 12:18:05.314993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.315010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 00:29:16.047 [2024-06-10 12:18:05.315166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.315183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 00:29:16.047 [2024-06-10 12:18:05.315382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.315398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 00:29:16.047 [2024-06-10 12:18:05.315576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.315594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 
00:29:16.047 [2024-06-10 12:18:05.315675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.315692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 00:29:16.047 [2024-06-10 12:18:05.315790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.315806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 00:29:16.047 [2024-06-10 12:18:05.315966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.315982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 00:29:16.047 [2024-06-10 12:18:05.316101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.316117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 00:29:16.047 [2024-06-10 12:18:05.316224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.316241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 
00:29:16.047 [2024-06-10 12:18:05.316414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.316430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 00:29:16.047 [2024-06-10 12:18:05.316592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.316609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 00:29:16.047 [2024-06-10 12:18:05.316772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.316788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 00:29:16.047 [2024-06-10 12:18:05.316896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.316913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 00:29:16.047 [2024-06-10 12:18:05.317012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.317028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 
00:29:16.047 [2024-06-10 12:18:05.317198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.317215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 00:29:16.047 [2024-06-10 12:18:05.317318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.317334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 00:29:16.047 [2024-06-10 12:18:05.317569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.317585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 00:29:16.047 [2024-06-10 12:18:05.317765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.047 [2024-06-10 12:18:05.317783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.047 qpair failed and we were unable to recover it. 00:29:16.048 [2024-06-10 12:18:05.317888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.048 [2024-06-10 12:18:05.317904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.048 qpair failed and we were unable to recover it. 
00:29:16.048 [2024-06-10 12:18:05.317988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.048 [2024-06-10 12:18:05.318005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.048 qpair failed and we were unable to recover it. 00:29:16.048 [2024-06-10 12:18:05.318164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.048 [2024-06-10 12:18:05.318181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.048 qpair failed and we were unable to recover it. 00:29:16.048 [2024-06-10 12:18:05.318348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.048 [2024-06-10 12:18:05.318365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.048 qpair failed and we were unable to recover it. 00:29:16.048 [2024-06-10 12:18:05.318576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.048 [2024-06-10 12:18:05.318593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.048 qpair failed and we were unable to recover it. 00:29:16.048 [2024-06-10 12:18:05.318748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.048 [2024-06-10 12:18:05.318765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.048 qpair failed and we were unable to recover it. 
00:29:16.048 [2024-06-10 12:18:05.318932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.048 [2024-06-10 12:18:05.318948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.048 qpair failed and we were unable to recover it. 00:29:16.048 [2024-06-10 12:18:05.319051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.048 [2024-06-10 12:18:05.319068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.048 qpair failed and we were unable to recover it. 00:29:16.048 [2024-06-10 12:18:05.319148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.048 [2024-06-10 12:18:05.319165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.048 qpair failed and we were unable to recover it. 00:29:16.048 [2024-06-10 12:18:05.319335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.048 [2024-06-10 12:18:05.319351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.048 qpair failed and we were unable to recover it. 00:29:16.048 [2024-06-10 12:18:05.319526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.048 [2024-06-10 12:18:05.319543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.048 qpair failed and we were unable to recover it. 
00:29:16.048 [2024-06-10 12:18:05.319669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.048 [2024-06-10 12:18:05.319686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.048 qpair failed and we were unable to recover it. 00:29:16.048 [2024-06-10 12:18:05.319779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.048 [2024-06-10 12:18:05.319798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.048 qpair failed and we were unable to recover it. 00:29:16.048 [2024-06-10 12:18:05.319907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.048 [2024-06-10 12:18:05.319923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.048 qpair failed and we were unable to recover it. 00:29:16.048 [2024-06-10 12:18:05.320083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.048 [2024-06-10 12:18:05.320099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.048 qpair failed and we were unable to recover it. 00:29:16.048 [2024-06-10 12:18:05.320266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.048 [2024-06-10 12:18:05.320282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.048 qpair failed and we were unable to recover it. 
00:29:16.048 [2024-06-10 12:18:05.320444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.048 [2024-06-10 12:18:05.320460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.048 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats 104 more times, timestamps 12:18:05.320678 through 12:18:05.336227 ...]
00:29:16.051 [2024-06-10 12:18:05.336341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.051 [2024-06-10 12:18:05.336366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.051 qpair failed and we were unable to recover it. 00:29:16.051 [2024-06-10 12:18:05.336547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.051 [2024-06-10 12:18:05.336565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.051 qpair failed and we were unable to recover it. 00:29:16.051 [2024-06-10 12:18:05.336666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.051 [2024-06-10 12:18:05.336682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.051 qpair failed and we were unable to recover it. 00:29:16.051 [2024-06-10 12:18:05.336840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.051 [2024-06-10 12:18:05.336856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.051 qpair failed and we were unable to recover it. 00:29:16.051 [2024-06-10 12:18:05.337016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.051 [2024-06-10 12:18:05.337033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.051 qpair failed and we were unable to recover it. 
00:29:16.051 [2024-06-10 12:18:05.337131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.051 [2024-06-10 12:18:05.337147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.051 qpair failed and we were unable to recover it. 00:29:16.051 [2024-06-10 12:18:05.337267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.051 [2024-06-10 12:18:05.337283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.051 qpair failed and we were unable to recover it. 00:29:16.051 [2024-06-10 12:18:05.337412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.051 [2024-06-10 12:18:05.337430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.051 qpair failed and we were unable to recover it. 00:29:16.051 [2024-06-10 12:18:05.337543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.051 [2024-06-10 12:18:05.337561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.051 qpair failed and we were unable to recover it. 00:29:16.051 [2024-06-10 12:18:05.337725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.051 [2024-06-10 12:18:05.337743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.051 qpair failed and we were unable to recover it. 
00:29:16.051 [2024-06-10 12:18:05.337843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.051 [2024-06-10 12:18:05.337860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.051 qpair failed and we were unable to recover it. 00:29:16.051 [2024-06-10 12:18:05.338044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.051 [2024-06-10 12:18:05.338061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.051 qpair failed and we were unable to recover it. 00:29:16.051 [2024-06-10 12:18:05.338175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.051 [2024-06-10 12:18:05.338192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.051 qpair failed and we were unable to recover it. 00:29:16.051 [2024-06-10 12:18:05.338368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.051 [2024-06-10 12:18:05.338389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.051 qpair failed and we were unable to recover it. 00:29:16.051 [2024-06-10 12:18:05.338552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.051 [2024-06-10 12:18:05.338570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.051 qpair failed and we were unable to recover it. 
00:29:16.051 [2024-06-10 12:18:05.338753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.051 [2024-06-10 12:18:05.338770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.051 qpair failed and we were unable to recover it. 00:29:16.051 [2024-06-10 12:18:05.338883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.051 [2024-06-10 12:18:05.338901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.051 qpair failed and we were unable to recover it. 00:29:16.051 [2024-06-10 12:18:05.339005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.051 [2024-06-10 12:18:05.339023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.051 qpair failed and we were unable to recover it. 00:29:16.051 [2024-06-10 12:18:05.339190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.051 [2024-06-10 12:18:05.339208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.051 qpair failed and we were unable to recover it. 00:29:16.051 [2024-06-10 12:18:05.339307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.051 [2024-06-10 12:18:05.339324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.051 qpair failed and we were unable to recover it. 
00:29:16.051 [2024-06-10 12:18:05.339491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.051 [2024-06-10 12:18:05.339509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.051 qpair failed and we were unable to recover it. 00:29:16.051 [2024-06-10 12:18:05.339680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.051 [2024-06-10 12:18:05.339698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.051 qpair failed and we were unable to recover it. 00:29:16.051 [2024-06-10 12:18:05.339810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.051 [2024-06-10 12:18:05.339828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.051 qpair failed and we were unable to recover it. 00:29:16.051 [2024-06-10 12:18:05.339926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.051 [2024-06-10 12:18:05.339944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.051 qpair failed and we were unable to recover it. 00:29:16.051 [2024-06-10 12:18:05.340026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.051 [2024-06-10 12:18:05.340043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.051 qpair failed and we were unable to recover it. 
00:29:16.051 [2024-06-10 12:18:05.340296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.051 [2024-06-10 12:18:05.340315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.051 qpair failed and we were unable to recover it. 00:29:16.051 [2024-06-10 12:18:05.340423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.051 [2024-06-10 12:18:05.340440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.051 qpair failed and we were unable to recover it. 00:29:16.051 [2024-06-10 12:18:05.340552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.051 [2024-06-10 12:18:05.340569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.051 qpair failed and we were unable to recover it. 00:29:16.051 [2024-06-10 12:18:05.340797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.051 [2024-06-10 12:18:05.340815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.051 qpair failed and we were unable to recover it. 00:29:16.051 [2024-06-10 12:18:05.341055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.341073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 
00:29:16.052 [2024-06-10 12:18:05.341173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.341191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 00:29:16.052 [2024-06-10 12:18:05.341295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.341311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 00:29:16.052 [2024-06-10 12:18:05.341410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.341426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 00:29:16.052 [2024-06-10 12:18:05.341586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.341604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 00:29:16.052 [2024-06-10 12:18:05.341729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.341746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 
00:29:16.052 [2024-06-10 12:18:05.341995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.342013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 00:29:16.052 [2024-06-10 12:18:05.342106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.342124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 00:29:16.052 [2024-06-10 12:18:05.342411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.342429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 00:29:16.052 [2024-06-10 12:18:05.342667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.342685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 00:29:16.052 [2024-06-10 12:18:05.342944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.342961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 
00:29:16.052 [2024-06-10 12:18:05.343065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.343082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 00:29:16.052 [2024-06-10 12:18:05.343258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.343274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 00:29:16.052 [2024-06-10 12:18:05.343548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.343564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 00:29:16.052 [2024-06-10 12:18:05.343751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.343767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 00:29:16.052 [2024-06-10 12:18:05.343936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.343952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 
00:29:16.052 [2024-06-10 12:18:05.344183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.344199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 00:29:16.052 [2024-06-10 12:18:05.344368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.344385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 00:29:16.052 [2024-06-10 12:18:05.344555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.344571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 00:29:16.052 [2024-06-10 12:18:05.344665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.344681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 00:29:16.052 [2024-06-10 12:18:05.344782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.344799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 
00:29:16.052 [2024-06-10 12:18:05.344957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.344973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 00:29:16.052 [2024-06-10 12:18:05.345134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.345150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 00:29:16.052 [2024-06-10 12:18:05.345259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.345275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 00:29:16.052 [2024-06-10 12:18:05.345449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.345468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 00:29:16.052 [2024-06-10 12:18:05.345563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.345581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 
00:29:16.052 [2024-06-10 12:18:05.345692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.345710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 00:29:16.052 [2024-06-10 12:18:05.345936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.345953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 00:29:16.052 [2024-06-10 12:18:05.346134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.346150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 00:29:16.052 [2024-06-10 12:18:05.346311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.346327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 00:29:16.052 [2024-06-10 12:18:05.346426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.346443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 
00:29:16.052 [2024-06-10 12:18:05.346661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.346678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 00:29:16.052 [2024-06-10 12:18:05.346768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.346784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 00:29:16.052 [2024-06-10 12:18:05.346912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.346928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 00:29:16.052 [2024-06-10 12:18:05.347044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.347060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 00:29:16.052 [2024-06-10 12:18:05.347169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.347186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 
00:29:16.052 [2024-06-10 12:18:05.347432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.347448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 00:29:16.052 [2024-06-10 12:18:05.347551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.347568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 00:29:16.052 [2024-06-10 12:18:05.347679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.052 [2024-06-10 12:18:05.347695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.052 qpair failed and we were unable to recover it. 00:29:16.052 [2024-06-10 12:18:05.347868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.347883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 00:29:16.053 [2024-06-10 12:18:05.347982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.347998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 
00:29:16.053 [2024-06-10 12:18:05.348089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.348106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 00:29:16.053 [2024-06-10 12:18:05.348210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.348226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 00:29:16.053 [2024-06-10 12:18:05.348396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.348412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 00:29:16.053 [2024-06-10 12:18:05.348533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.348550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 00:29:16.053 [2024-06-10 12:18:05.348638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.348655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 
00:29:16.053 [2024-06-10 12:18:05.348813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.348829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 00:29:16.053 [2024-06-10 12:18:05.348950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.348967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 00:29:16.053 [2024-06-10 12:18:05.349168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.349184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 00:29:16.053 [2024-06-10 12:18:05.349361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.349378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 00:29:16.053 [2024-06-10 12:18:05.349452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.349468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 
00:29:16.053 [2024-06-10 12:18:05.349588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.349620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 00:29:16.053 [2024-06-10 12:18:05.349744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.349766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 00:29:16.053 [2024-06-10 12:18:05.349881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.349898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 00:29:16.053 [2024-06-10 12:18:05.350110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.350126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 00:29:16.053 [2024-06-10 12:18:05.350287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.350303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 
00:29:16.053 [2024-06-10 12:18:05.350464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.350486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 00:29:16.053 [2024-06-10 12:18:05.350595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.350612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 00:29:16.053 [2024-06-10 12:18:05.350686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.350702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 00:29:16.053 [2024-06-10 12:18:05.350881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.350897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 00:29:16.053 [2024-06-10 12:18:05.351056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.351072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 
00:29:16.053 [2024-06-10 12:18:05.351175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.351191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 00:29:16.053 [2024-06-10 12:18:05.351303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.351319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 00:29:16.053 [2024-06-10 12:18:05.351544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.351560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 00:29:16.053 [2024-06-10 12:18:05.351651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.351668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 00:29:16.053 [2024-06-10 12:18:05.351766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.351783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 
00:29:16.053 [2024-06-10 12:18:05.352031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.352047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 00:29:16.053 [2024-06-10 12:18:05.352144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.352160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 00:29:16.053 [2024-06-10 12:18:05.352257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.352273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 00:29:16.053 [2024-06-10 12:18:05.352369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.352385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 00:29:16.053 [2024-06-10 12:18:05.352568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.352585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 
00:29:16.053 [2024-06-10 12:18:05.352824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.352841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 00:29:16.053 [2024-06-10 12:18:05.352946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.352962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 00:29:16.053 [2024-06-10 12:18:05.353087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.353103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 00:29:16.053 [2024-06-10 12:18:05.353216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.353232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 00:29:16.053 [2024-06-10 12:18:05.353351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.353367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 
00:29:16.053 [2024-06-10 12:18:05.353548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.353564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 00:29:16.053 [2024-06-10 12:18:05.353724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.053 [2024-06-10 12:18:05.353741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.053 qpair failed and we were unable to recover it. 00:29:16.053 [2024-06-10 12:18:05.353843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.054 [2024-06-10 12:18:05.353862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.054 qpair failed and we were unable to recover it. 00:29:16.054 [2024-06-10 12:18:05.353952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.054 [2024-06-10 12:18:05.353969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.054 qpair failed and we were unable to recover it. 00:29:16.054 [2024-06-10 12:18:05.354061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.054 [2024-06-10 12:18:05.354077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.054 qpair failed and we were unable to recover it. 
00:29:16.054 [2024-06-10 12:18:05.354174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.054 [2024-06-10 12:18:05.354190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.054 qpair failed and we were unable to recover it. 00:29:16.054 [2024-06-10 12:18:05.354287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.054 [2024-06-10 12:18:05.354303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.054 qpair failed and we were unable to recover it. 00:29:16.054 [2024-06-10 12:18:05.354391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.054 [2024-06-10 12:18:05.354408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.054 qpair failed and we were unable to recover it. 00:29:16.054 [2024-06-10 12:18:05.354563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.054 [2024-06-10 12:18:05.354580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.054 qpair failed and we were unable to recover it. 00:29:16.054 [2024-06-10 12:18:05.354686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.054 [2024-06-10 12:18:05.354702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.054 qpair failed and we were unable to recover it. 
00:29:16.054 [2024-06-10 12:18:05.354802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.054 [2024-06-10 12:18:05.354818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.054 qpair failed and we were unable to recover it. 00:29:16.054 [2024-06-10 12:18:05.355020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.054 [2024-06-10 12:18:05.355036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.054 qpair failed and we were unable to recover it. 00:29:16.054 [2024-06-10 12:18:05.355148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.054 [2024-06-10 12:18:05.355164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.054 qpair failed and we were unable to recover it. 00:29:16.054 [2024-06-10 12:18:05.355255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.054 [2024-06-10 12:18:05.355272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.054 qpair failed and we were unable to recover it. 00:29:16.054 [2024-06-10 12:18:05.355359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.054 [2024-06-10 12:18:05.355375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.054 qpair failed and we were unable to recover it. 
00:29:16.054 [2024-06-10 12:18:05.355546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.054 [2024-06-10 12:18:05.355563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.054 qpair failed and we were unable to recover it. 00:29:16.054 [2024-06-10 12:18:05.355725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.054 [2024-06-10 12:18:05.355741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.054 qpair failed and we were unable to recover it. 00:29:16.054 [2024-06-10 12:18:05.355840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.054 [2024-06-10 12:18:05.355856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.054 qpair failed and we were unable to recover it. 00:29:16.054 [2024-06-10 12:18:05.355948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.054 [2024-06-10 12:18:05.355964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.054 qpair failed and we were unable to recover it. 00:29:16.054 [2024-06-10 12:18:05.356067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.054 [2024-06-10 12:18:05.356084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.054 qpair failed and we were unable to recover it. 
00:29:16.054 [2024-06-10 12:18:05.356256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.054 [2024-06-10 12:18:05.356272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.054 qpair failed and we were unable to recover it. 00:29:16.054 [2024-06-10 12:18:05.356517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.054 [2024-06-10 12:18:05.356533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.054 qpair failed and we were unable to recover it. 00:29:16.054 [2024-06-10 12:18:05.356630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.054 [2024-06-10 12:18:05.356646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.054 qpair failed and we were unable to recover it. 00:29:16.054 [2024-06-10 12:18:05.356808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.054 [2024-06-10 12:18:05.356824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.054 qpair failed and we were unable to recover it. 00:29:16.054 [2024-06-10 12:18:05.356982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.054 [2024-06-10 12:18:05.356999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.054 qpair failed and we were unable to recover it. 
00:29:16.054 [2024-06-10 12:18:05.357274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.054 [2024-06-10 12:18:05.357291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.054 qpair failed and we were unable to recover it. 00:29:16.054 [2024-06-10 12:18:05.357398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.054 [2024-06-10 12:18:05.357414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.054 qpair failed and we were unable to recover it. 00:29:16.054 [2024-06-10 12:18:05.357586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.054 [2024-06-10 12:18:05.357603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.054 qpair failed and we were unable to recover it. 00:29:16.054 [2024-06-10 12:18:05.357763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.054 [2024-06-10 12:18:05.357780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.054 qpair failed and we were unable to recover it. 00:29:16.054 [2024-06-10 12:18:05.357870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.054 [2024-06-10 12:18:05.357886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.054 qpair failed and we were unable to recover it. 
00:29:16.054 [2024-06-10 12:18:05.357990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.054 [2024-06-10 12:18:05.358007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.054 qpair failed and we were unable to recover it. 00:29:16.054 [2024-06-10 12:18:05.358097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.054 [2024-06-10 12:18:05.358114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.054 qpair failed and we were unable to recover it. 00:29:16.054 [2024-06-10 12:18:05.358218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.054 [2024-06-10 12:18:05.358234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.054 qpair failed and we were unable to recover it. 00:29:16.054 [2024-06-10 12:18:05.358333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.358349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 00:29:16.055 [2024-06-10 12:18:05.358598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.358615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 
00:29:16.055 [2024-06-10 12:18:05.358797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.358813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 00:29:16.055 [2024-06-10 12:18:05.358953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.358969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 00:29:16.055 [2024-06-10 12:18:05.359077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.359093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 00:29:16.055 [2024-06-10 12:18:05.359250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.359267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 00:29:16.055 [2024-06-10 12:18:05.359422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.359438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 
00:29:16.055 [2024-06-10 12:18:05.359525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.359542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 00:29:16.055 [2024-06-10 12:18:05.359712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.359729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 00:29:16.055 [2024-06-10 12:18:05.359864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.359880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 00:29:16.055 [2024-06-10 12:18:05.360058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.360075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 00:29:16.055 [2024-06-10 12:18:05.360163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.360179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 
00:29:16.055 [2024-06-10 12:18:05.360355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.360372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 00:29:16.055 [2024-06-10 12:18:05.360547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.360564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 00:29:16.055 [2024-06-10 12:18:05.360721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.360738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 00:29:16.055 [2024-06-10 12:18:05.360898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.360914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 00:29:16.055 [2024-06-10 12:18:05.361075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.361091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 
00:29:16.055 [2024-06-10 12:18:05.361322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.361338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 00:29:16.055 [2024-06-10 12:18:05.361499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.361516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 00:29:16.055 [2024-06-10 12:18:05.361629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.361645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 00:29:16.055 [2024-06-10 12:18:05.361731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.361747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 00:29:16.055 [2024-06-10 12:18:05.361938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.361954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 
00:29:16.055 [2024-06-10 12:18:05.362044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.362060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 00:29:16.055 [2024-06-10 12:18:05.362178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.362195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 00:29:16.055 [2024-06-10 12:18:05.362303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.362319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 00:29:16.055 [2024-06-10 12:18:05.362424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.362441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 00:29:16.055 [2024-06-10 12:18:05.362691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.362708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 
00:29:16.055 [2024-06-10 12:18:05.362864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.362880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 00:29:16.055 [2024-06-10 12:18:05.362972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.362988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 00:29:16.055 [2024-06-10 12:18:05.363089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.363105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 00:29:16.055 [2024-06-10 12:18:05.363207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.363223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 00:29:16.055 [2024-06-10 12:18:05.363325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.363341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 
00:29:16.055 [2024-06-10 12:18:05.363509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.363526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 00:29:16.055 [2024-06-10 12:18:05.363632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.363649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 00:29:16.055 [2024-06-10 12:18:05.363816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.363833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 00:29:16.055 [2024-06-10 12:18:05.363927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.055 [2024-06-10 12:18:05.363943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.055 qpair failed and we were unable to recover it. 00:29:16.055 [2024-06-10 12:18:05.364123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.364139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 
00:29:16.056 [2024-06-10 12:18:05.364299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.364318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 00:29:16.056 [2024-06-10 12:18:05.364487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.364504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 00:29:16.056 [2024-06-10 12:18:05.364599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.364616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 00:29:16.056 [2024-06-10 12:18:05.364776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.364793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 00:29:16.056 [2024-06-10 12:18:05.364917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.364933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 
00:29:16.056 [2024-06-10 12:18:05.365030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.365046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 00:29:16.056 [2024-06-10 12:18:05.365134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.365150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 00:29:16.056 [2024-06-10 12:18:05.365334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.365350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 00:29:16.056 [2024-06-10 12:18:05.365508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.365525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 00:29:16.056 [2024-06-10 12:18:05.365606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.365623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 
00:29:16.056 [2024-06-10 12:18:05.365914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.365932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 00:29:16.056 [2024-06-10 12:18:05.366118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.366135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 00:29:16.056 [2024-06-10 12:18:05.366247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.366264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 00:29:16.056 [2024-06-10 12:18:05.366368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.366385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 00:29:16.056 [2024-06-10 12:18:05.366546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.366563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 
00:29:16.056 [2024-06-10 12:18:05.366655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.366672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 00:29:16.056 [2024-06-10 12:18:05.366766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.366782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 00:29:16.056 [2024-06-10 12:18:05.366895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.366911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 00:29:16.056 [2024-06-10 12:18:05.367033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.367050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 00:29:16.056 [2024-06-10 12:18:05.367207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.367224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 
00:29:16.056 [2024-06-10 12:18:05.367324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.367341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 00:29:16.056 [2024-06-10 12:18:05.367446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.367463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 00:29:16.056 [2024-06-10 12:18:05.367692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.367708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 00:29:16.056 [2024-06-10 12:18:05.367814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.367830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 00:29:16.056 [2024-06-10 12:18:05.367995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.368012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 
00:29:16.056 [2024-06-10 12:18:05.368124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.368141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 00:29:16.056 [2024-06-10 12:18:05.368318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.368335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 00:29:16.056 [2024-06-10 12:18:05.368568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.368587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 00:29:16.056 [2024-06-10 12:18:05.368685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.368702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 00:29:16.056 [2024-06-10 12:18:05.368784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.368800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 
00:29:16.056 [2024-06-10 12:18:05.368903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.368920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 00:29:16.056 [2024-06-10 12:18:05.369079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.369097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 00:29:16.056 [2024-06-10 12:18:05.369191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.369209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 00:29:16.056 [2024-06-10 12:18:05.369367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.369385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 00:29:16.056 [2024-06-10 12:18:05.369480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.369497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 
00:29:16.056 [2024-06-10 12:18:05.369596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.369613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.056 qpair failed and we were unable to recover it. 00:29:16.056 [2024-06-10 12:18:05.369689] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:16.056 [2024-06-10 12:18:05.369720] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:16.056 [2024-06-10 12:18:05.369731] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:16.056 [2024-06-10 12:18:05.369740] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:16.056 [2024-06-10 12:18:05.369747] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:16.056 [2024-06-10 12:18:05.369781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.056 [2024-06-10 12:18:05.369798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.057 qpair failed and we were unable to recover it. 00:29:16.057 [2024-06-10 12:18:05.369870] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:29:16.057 [2024-06-10 12:18:05.370020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.057 [2024-06-10 12:18:05.370037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.057 qpair failed and we were unable to recover it. 
00:29:16.057 [2024-06-10 12:18:05.369978] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:29:16.057 [2024-06-10 12:18:05.370087] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:29:16.057 [2024-06-10 12:18:05.370167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.057 [2024-06-10 12:18:05.370187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.057 qpair failed and we were unable to recover it. 00:29:16.057 [2024-06-10 12:18:05.370088] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:29:16.057 [2024-06-10 12:18:05.370350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.057 [2024-06-10 12:18:05.370368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.057 qpair failed and we were unable to recover it. 00:29:16.057 [2024-06-10 12:18:05.370446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.057 [2024-06-10 12:18:05.370463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.057 qpair failed and we were unable to recover it. 00:29:16.057 [2024-06-10 12:18:05.370633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.057 [2024-06-10 12:18:05.370651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.057 qpair failed and we were unable to recover it. 
00:29:16.057 [2024-06-10 12:18:05.370735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.057 [2024-06-10 12:18:05.370753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.057 qpair failed and we were unable to recover it. 00:29:16.057 [2024-06-10 12:18:05.370938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.057 [2024-06-10 12:18:05.370956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.057 qpair failed and we were unable to recover it. 00:29:16.057 [2024-06-10 12:18:05.371113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.057 [2024-06-10 12:18:05.371129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.057 qpair failed and we were unable to recover it. 00:29:16.057 [2024-06-10 12:18:05.371305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.057 [2024-06-10 12:18:05.371322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.057 qpair failed and we were unable to recover it. 00:29:16.057 [2024-06-10 12:18:05.371573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.057 [2024-06-10 12:18:05.371592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.057 qpair failed and we were unable to recover it. 
00:29:16.057 [2024-06-10 12:18:05.371753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.057 [2024-06-10 12:18:05.371770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.057 qpair failed and we were unable to recover it. 00:29:16.057 [2024-06-10 12:18:05.372046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.057 [2024-06-10 12:18:05.372063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.057 qpair failed and we were unable to recover it. 00:29:16.057 [2024-06-10 12:18:05.372233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.057 [2024-06-10 12:18:05.372250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.057 qpair failed and we were unable to recover it. 00:29:16.057 [2024-06-10 12:18:05.372523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.057 [2024-06-10 12:18:05.372541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.057 qpair failed and we were unable to recover it. 00:29:16.057 [2024-06-10 12:18:05.372648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.057 [2024-06-10 12:18:05.372665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.057 qpair failed and we were unable to recover it. 
00:29:16.057 [2024-06-10 12:18:05.372760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.057 [2024-06-10 12:18:05.372777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.057 qpair failed and we were unable to recover it. 00:29:16.057 [2024-06-10 12:18:05.372870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.057 [2024-06-10 12:18:05.372887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.057 qpair failed and we were unable to recover it. 00:29:16.057 [2024-06-10 12:18:05.373112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.057 [2024-06-10 12:18:05.373131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.057 qpair failed and we were unable to recover it. 00:29:16.057 [2024-06-10 12:18:05.373234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.057 [2024-06-10 12:18:05.373251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.057 qpair failed and we were unable to recover it. 00:29:16.057 [2024-06-10 12:18:05.373354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.057 [2024-06-10 12:18:05.373370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.057 qpair failed and we were unable to recover it. 
00:29:16.057 [2024-06-10 12:18:05.373531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.057 [2024-06-10 12:18:05.373549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.057 qpair failed and we were unable to recover it. 00:29:16.057 [2024-06-10 12:18:05.373715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.057 [2024-06-10 12:18:05.373733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.057 qpair failed and we were unable to recover it. 00:29:16.057 [2024-06-10 12:18:05.373875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.057 [2024-06-10 12:18:05.373891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.057 qpair failed and we were unable to recover it. 00:29:16.057 [2024-06-10 12:18:05.373982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.057 [2024-06-10 12:18:05.373999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.057 qpair failed and we were unable to recover it. 00:29:16.057 [2024-06-10 12:18:05.374101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.057 [2024-06-10 12:18:05.374117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.057 qpair failed and we were unable to recover it. 
00:29:16.057 [2024-06-10 12:18:05.374275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.057 [2024-06-10 12:18:05.374291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.057 qpair failed and we were unable to recover it. 00:29:16.057 [2024-06-10 12:18:05.374525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.057 [2024-06-10 12:18:05.374543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.057 qpair failed and we were unable to recover it. 00:29:16.057 [2024-06-10 12:18:05.374682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.057 [2024-06-10 12:18:05.374698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.057 qpair failed and we were unable to recover it. 00:29:16.057 [2024-06-10 12:18:05.374894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.057 [2024-06-10 12:18:05.374919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.057 qpair failed and we were unable to recover it. 00:29:16.057 [2024-06-10 12:18:05.375088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.057 [2024-06-10 12:18:05.375106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.057 qpair failed and we were unable to recover it. 
00:29:16.057 [2024-06-10 12:18:05.375228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.057 [2024-06-10 12:18:05.375244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.057 qpair failed and we were unable to recover it. 00:29:16.057 [2024-06-10 12:18:05.375348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.057 [2024-06-10 12:18:05.375364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.057 qpair failed and we were unable to recover it. 00:29:16.057 [2024-06-10 12:18:05.375467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.057 [2024-06-10 12:18:05.375504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.057 qpair failed and we were unable to recover it. 00:29:16.058 [2024-06-10 12:18:05.375664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.058 [2024-06-10 12:18:05.375680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.058 qpair failed and we were unable to recover it. 00:29:16.058 [2024-06-10 12:18:05.375848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.058 [2024-06-10 12:18:05.375866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.058 qpair failed and we were unable to recover it. 
00:29:16.058 [2024-06-10 12:18:05.375983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.058 [2024-06-10 12:18:05.376001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.058 qpair failed and we were unable to recover it. 00:29:16.058 [2024-06-10 12:18:05.376094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.058 [2024-06-10 12:18:05.376112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.058 qpair failed and we were unable to recover it. 00:29:16.058 [2024-06-10 12:18:05.376285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.058 [2024-06-10 12:18:05.376302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.058 qpair failed and we were unable to recover it. 00:29:16.058 [2024-06-10 12:18:05.376511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.058 [2024-06-10 12:18:05.376529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.058 qpair failed and we were unable to recover it. 00:29:16.058 [2024-06-10 12:18:05.376620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.058 [2024-06-10 12:18:05.376636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.058 qpair failed and we were unable to recover it. 
00:29:16.058 [2024-06-10 12:18:05.376746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.058 [2024-06-10 12:18:05.376764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.058 qpair failed and we were unable to recover it. 00:29:16.058 [2024-06-10 12:18:05.376991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.058 [2024-06-10 12:18:05.377014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.058 qpair failed and we were unable to recover it. 00:29:16.058 [2024-06-10 12:18:05.377149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.058 [2024-06-10 12:18:05.377166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.058 qpair failed and we were unable to recover it. 00:29:16.058 [2024-06-10 12:18:05.377274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.058 [2024-06-10 12:18:05.377291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.058 qpair failed and we were unable to recover it. 00:29:16.058 [2024-06-10 12:18:05.377451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.058 [2024-06-10 12:18:05.377467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.058 qpair failed and we were unable to recover it. 
00:29:16.058 [2024-06-10 12:18:05.377637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.058 [2024-06-10 12:18:05.377653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.058 qpair failed and we were unable to recover it. 00:29:16.058 [2024-06-10 12:18:05.377801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.058 [2024-06-10 12:18:05.377817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.058 qpair failed and we were unable to recover it. 00:29:16.058 [2024-06-10 12:18:05.377911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.058 [2024-06-10 12:18:05.377927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.058 qpair failed and we were unable to recover it. 00:29:16.058 [2024-06-10 12:18:05.378022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.058 [2024-06-10 12:18:05.378038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.058 qpair failed and we were unable to recover it. 00:29:16.058 [2024-06-10 12:18:05.378196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.058 [2024-06-10 12:18:05.378213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.058 qpair failed and we were unable to recover it. 
00:29:16.058 [2024-06-10 12:18:05.378457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.058 [2024-06-10 12:18:05.378483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.058 qpair failed and we were unable to recover it. 00:29:16.058 [2024-06-10 12:18:05.378698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.058 [2024-06-10 12:18:05.378715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.058 qpair failed and we were unable to recover it. 00:29:16.058 [2024-06-10 12:18:05.378837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.058 [2024-06-10 12:18:05.378854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.058 qpair failed and we were unable to recover it. 00:29:16.058 [2024-06-10 12:18:05.379126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.058 [2024-06-10 12:18:05.379144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.058 qpair failed and we were unable to recover it. 00:29:16.058 [2024-06-10 12:18:05.379333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.058 [2024-06-10 12:18:05.379350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.058 qpair failed and we were unable to recover it. 
00:29:16.058 [2024-06-10 12:18:05.379518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.058 [2024-06-10 12:18:05.379536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.058 qpair failed and we were unable to recover it. 00:29:16.058 [2024-06-10 12:18:05.379699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.058 [2024-06-10 12:18:05.379717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.058 qpair failed and we were unable to recover it. 00:29:16.058 [2024-06-10 12:18:05.379943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.058 [2024-06-10 12:18:05.379961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.058 qpair failed and we were unable to recover it. 00:29:16.058 [2024-06-10 12:18:05.380232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.058 [2024-06-10 12:18:05.380250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.058 qpair failed and we were unable to recover it. 00:29:16.058 [2024-06-10 12:18:05.380516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.058 [2024-06-10 12:18:05.380535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.058 qpair failed and we were unable to recover it. 
00:29:16.058 [2024-06-10 12:18:05.380674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.058 [2024-06-10 12:18:05.380691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.058 qpair failed and we were unable to recover it.
00:29:16.058 [... the same three-line failure (connect() errno = 111, sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats 114 more times between 12:18:05.380811 and 12:18:05.398438 ...]
00:29:16.062 [2024-06-10 12:18:05.398549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.398569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 00:29:16.062 [2024-06-10 12:18:05.398678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.398694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 00:29:16.062 [2024-06-10 12:18:05.398772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.398787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 00:29:16.062 [2024-06-10 12:18:05.398892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.398908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 00:29:16.062 [2024-06-10 12:18:05.399025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.399041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 
00:29:16.062 [2024-06-10 12:18:05.399268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.399284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 00:29:16.062 [2024-06-10 12:18:05.399442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.399459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 00:29:16.062 [2024-06-10 12:18:05.399624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.399642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 00:29:16.062 [2024-06-10 12:18:05.399749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.399765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 00:29:16.062 [2024-06-10 12:18:05.399929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.399945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 
00:29:16.062 [2024-06-10 12:18:05.400101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.400118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 00:29:16.062 [2024-06-10 12:18:05.400277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.400294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 00:29:16.062 [2024-06-10 12:18:05.400387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.400404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 00:29:16.062 [2024-06-10 12:18:05.400565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.400585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 00:29:16.062 [2024-06-10 12:18:05.400809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.400825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 
00:29:16.062 [2024-06-10 12:18:05.400928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.400944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 00:29:16.062 [2024-06-10 12:18:05.401054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.401071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 00:29:16.062 [2024-06-10 12:18:05.401240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.401256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 00:29:16.062 [2024-06-10 12:18:05.401490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.401507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 00:29:16.062 [2024-06-10 12:18:05.401610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.401626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 
00:29:16.062 [2024-06-10 12:18:05.401747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.401763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 00:29:16.062 [2024-06-10 12:18:05.401930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.401947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 00:29:16.062 [2024-06-10 12:18:05.402038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.402054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 00:29:16.062 [2024-06-10 12:18:05.402144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.402160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 00:29:16.062 [2024-06-10 12:18:05.402408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.402439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 
00:29:16.062 [2024-06-10 12:18:05.402555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.402572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 00:29:16.062 [2024-06-10 12:18:05.402738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.402754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 00:29:16.062 [2024-06-10 12:18:05.402888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.402904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 00:29:16.062 [2024-06-10 12:18:05.403007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.403024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 00:29:16.062 [2024-06-10 12:18:05.403130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.403147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 
00:29:16.062 [2024-06-10 12:18:05.403238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.403259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 00:29:16.062 [2024-06-10 12:18:05.403423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.403439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 00:29:16.062 [2024-06-10 12:18:05.403548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.403565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 00:29:16.062 [2024-06-10 12:18:05.403729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.403745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 00:29:16.062 [2024-06-10 12:18:05.403930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.062 [2024-06-10 12:18:05.403947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.062 qpair failed and we were unable to recover it. 
00:29:16.062 [2024-06-10 12:18:05.404063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.063 [2024-06-10 12:18:05.404079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.063 qpair failed and we were unable to recover it. 00:29:16.063 [2024-06-10 12:18:05.404312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.063 [2024-06-10 12:18:05.404328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.063 qpair failed and we were unable to recover it. 00:29:16.063 [2024-06-10 12:18:05.404491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.063 [2024-06-10 12:18:05.404508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.063 qpair failed and we were unable to recover it. 00:29:16.063 [2024-06-10 12:18:05.404711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.063 [2024-06-10 12:18:05.404728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.063 qpair failed and we were unable to recover it. 00:29:16.063 [2024-06-10 12:18:05.404910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.063 [2024-06-10 12:18:05.404927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.063 qpair failed and we were unable to recover it. 
00:29:16.063 [2024-06-10 12:18:05.405109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.063 [2024-06-10 12:18:05.405129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.063 qpair failed and we were unable to recover it. 00:29:16.063 [2024-06-10 12:18:05.405207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.063 [2024-06-10 12:18:05.405226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.063 qpair failed and we were unable to recover it. 00:29:16.063 [2024-06-10 12:18:05.405336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.063 [2024-06-10 12:18:05.405352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.063 qpair failed and we were unable to recover it. 00:29:16.063 [2024-06-10 12:18:05.405487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.063 [2024-06-10 12:18:05.405505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.063 qpair failed and we were unable to recover it. 00:29:16.063 [2024-06-10 12:18:05.405597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.063 [2024-06-10 12:18:05.405614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.063 qpair failed and we were unable to recover it. 
00:29:16.063 [2024-06-10 12:18:05.405777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.063 [2024-06-10 12:18:05.405793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.063 qpair failed and we were unable to recover it. 00:29:16.063 [2024-06-10 12:18:05.405933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.063 [2024-06-10 12:18:05.405950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.063 qpair failed and we were unable to recover it. 00:29:16.063 [2024-06-10 12:18:05.406068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.063 [2024-06-10 12:18:05.406085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.063 qpair failed and we were unable to recover it. 00:29:16.063 [2024-06-10 12:18:05.406180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.063 [2024-06-10 12:18:05.406196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.063 qpair failed and we were unable to recover it. 00:29:16.063 [2024-06-10 12:18:05.406303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.063 [2024-06-10 12:18:05.406320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.063 qpair failed and we were unable to recover it. 
00:29:16.063 [2024-06-10 12:18:05.406496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.063 [2024-06-10 12:18:05.406513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.063 qpair failed and we were unable to recover it. 00:29:16.063 [2024-06-10 12:18:05.406694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.063 [2024-06-10 12:18:05.406710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.063 qpair failed and we were unable to recover it. 00:29:16.063 [2024-06-10 12:18:05.406906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.063 [2024-06-10 12:18:05.406923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.063 qpair failed and we were unable to recover it. 00:29:16.063 [2024-06-10 12:18:05.407079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.063 [2024-06-10 12:18:05.407095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.063 qpair failed and we were unable to recover it. 00:29:16.063 [2024-06-10 12:18:05.407206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.063 [2024-06-10 12:18:05.407222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.063 qpair failed and we were unable to recover it. 
00:29:16.063 [2024-06-10 12:18:05.407332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.063 [2024-06-10 12:18:05.407349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.063 qpair failed and we were unable to recover it. 00:29:16.063 [2024-06-10 12:18:05.407547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.063 [2024-06-10 12:18:05.407563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.063 qpair failed and we were unable to recover it. 00:29:16.063 [2024-06-10 12:18:05.407719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.063 [2024-06-10 12:18:05.407735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.063 qpair failed and we were unable to recover it. 00:29:16.063 [2024-06-10 12:18:05.407892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.063 [2024-06-10 12:18:05.407911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.063 qpair failed and we were unable to recover it. 00:29:16.063 [2024-06-10 12:18:05.408086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.063 [2024-06-10 12:18:05.408102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.063 qpair failed and we were unable to recover it. 
00:29:16.063 [2024-06-10 12:18:05.408210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.063 [2024-06-10 12:18:05.408226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.063 qpair failed and we were unable to recover it. 00:29:16.063 [2024-06-10 12:18:05.408385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.063 [2024-06-10 12:18:05.408402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.063 qpair failed and we were unable to recover it. 00:29:16.063 [2024-06-10 12:18:05.408520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.063 [2024-06-10 12:18:05.408548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.063 qpair failed and we were unable to recover it. 00:29:16.063 [2024-06-10 12:18:05.408789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.063 [2024-06-10 12:18:05.408808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.063 qpair failed and we were unable to recover it. 00:29:16.063 [2024-06-10 12:18:05.409036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.063 [2024-06-10 12:18:05.409053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.063 qpair failed and we were unable to recover it. 
00:29:16.063 [2024-06-10 12:18:05.409139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.063 [2024-06-10 12:18:05.409155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.063 qpair failed and we were unable to recover it. 00:29:16.063 [2024-06-10 12:18:05.409243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.063 [2024-06-10 12:18:05.409259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.064 qpair failed and we were unable to recover it. 00:29:16.064 [2024-06-10 12:18:05.409406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.064 [2024-06-10 12:18:05.409426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.064 qpair failed and we were unable to recover it. 00:29:16.064 [2024-06-10 12:18:05.409517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.064 [2024-06-10 12:18:05.409533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.064 qpair failed and we were unable to recover it. 00:29:16.064 [2024-06-10 12:18:05.409753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.064 [2024-06-10 12:18:05.409770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.064 qpair failed and we were unable to recover it. 
00:29:16.064 [2024-06-10 12:18:05.409970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.064 [2024-06-10 12:18:05.409987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.064 qpair failed and we were unable to recover it. 00:29:16.064 [2024-06-10 12:18:05.410256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.064 [2024-06-10 12:18:05.410272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.064 qpair failed and we were unable to recover it. 00:29:16.064 [2024-06-10 12:18:05.410385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.064 [2024-06-10 12:18:05.410401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.064 qpair failed and we were unable to recover it. 00:29:16.064 [2024-06-10 12:18:05.410605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.064 [2024-06-10 12:18:05.410622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.064 qpair failed and we were unable to recover it. 00:29:16.064 [2024-06-10 12:18:05.410774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.064 [2024-06-10 12:18:05.410791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.064 qpair failed and we were unable to recover it. 
00:29:16.064 [2024-06-10 12:18:05.410919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.064 [2024-06-10 12:18:05.410936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.064 qpair failed and we were unable to recover it. 00:29:16.064 [2024-06-10 12:18:05.411121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.064 [2024-06-10 12:18:05.411137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.064 qpair failed and we were unable to recover it. 00:29:16.064 [2024-06-10 12:18:05.411308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.064 [2024-06-10 12:18:05.411324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.064 qpair failed and we were unable to recover it. 00:29:16.064 [2024-06-10 12:18:05.411487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.064 [2024-06-10 12:18:05.411504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.064 qpair failed and we were unable to recover it. 00:29:16.064 [2024-06-10 12:18:05.411671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.064 [2024-06-10 12:18:05.411688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.064 qpair failed and we were unable to recover it. 
00:29:16.065 [2024-06-10 12:18:05.423964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.065 [2024-06-10 12:18:05.423988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.065 qpair failed and we were unable to recover it.
00:29:16.067 [2024-06-10 12:18:05.436095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.067 [2024-06-10 12:18:05.436112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.067 qpair failed and we were unable to recover it. 00:29:16.067 [2024-06-10 12:18:05.436215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.067 [2024-06-10 12:18:05.436231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.067 qpair failed and we were unable to recover it. 00:29:16.067 [2024-06-10 12:18:05.436400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.067 [2024-06-10 12:18:05.436416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.067 qpair failed and we were unable to recover it. 00:29:16.067 [2024-06-10 12:18:05.436718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.067 [2024-06-10 12:18:05.436735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.067 qpair failed and we were unable to recover it. 00:29:16.067 [2024-06-10 12:18:05.436959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.067 [2024-06-10 12:18:05.436975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.067 qpair failed and we were unable to recover it. 
00:29:16.067 [2024-06-10 12:18:05.437099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.067 [2024-06-10 12:18:05.437115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.067 qpair failed and we were unable to recover it. 00:29:16.067 [2024-06-10 12:18:05.437376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.067 [2024-06-10 12:18:05.437392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.067 qpair failed and we were unable to recover it. 00:29:16.067 [2024-06-10 12:18:05.437590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.067 [2024-06-10 12:18:05.437607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.067 qpair failed and we were unable to recover it. 00:29:16.067 [2024-06-10 12:18:05.437779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.067 [2024-06-10 12:18:05.437795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.067 qpair failed and we were unable to recover it. 00:29:16.067 [2024-06-10 12:18:05.437938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.067 [2024-06-10 12:18:05.437955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.067 qpair failed and we were unable to recover it. 
00:29:16.067 [2024-06-10 12:18:05.438061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.067 [2024-06-10 12:18:05.438077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.067 qpair failed and we were unable to recover it. 00:29:16.067 [2024-06-10 12:18:05.438360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.067 [2024-06-10 12:18:05.438377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.067 qpair failed and we were unable to recover it. 00:29:16.067 [2024-06-10 12:18:05.438550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.067 [2024-06-10 12:18:05.438567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.067 qpair failed and we were unable to recover it. 00:29:16.067 [2024-06-10 12:18:05.438737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.067 [2024-06-10 12:18:05.438753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.067 qpair failed and we were unable to recover it. 00:29:16.067 [2024-06-10 12:18:05.438999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.067 [2024-06-10 12:18:05.439015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.067 qpair failed and we were unable to recover it. 
00:29:16.067 [2024-06-10 12:18:05.439131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.067 [2024-06-10 12:18:05.439148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.067 qpair failed and we were unable to recover it. 00:29:16.067 [2024-06-10 12:18:05.439312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.067 [2024-06-10 12:18:05.439329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.067 qpair failed and we were unable to recover it. 00:29:16.067 [2024-06-10 12:18:05.439593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.067 [2024-06-10 12:18:05.439609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.067 qpair failed and we were unable to recover it. 00:29:16.067 [2024-06-10 12:18:05.439777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.067 [2024-06-10 12:18:05.439794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.067 qpair failed and we were unable to recover it. 00:29:16.067 [2024-06-10 12:18:05.440115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.067 [2024-06-10 12:18:05.440131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.067 qpair failed and we were unable to recover it. 
00:29:16.067 [2024-06-10 12:18:05.440366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.067 [2024-06-10 12:18:05.440382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.067 qpair failed and we were unable to recover it. 00:29:16.067 [2024-06-10 12:18:05.440585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.067 [2024-06-10 12:18:05.440601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.067 qpair failed and we were unable to recover it. 00:29:16.067 [2024-06-10 12:18:05.440718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.067 [2024-06-10 12:18:05.440734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.067 qpair failed and we were unable to recover it. 00:29:16.067 [2024-06-10 12:18:05.440862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.067 [2024-06-10 12:18:05.440878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.067 qpair failed and we were unable to recover it. 00:29:16.067 [2024-06-10 12:18:05.441056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.067 [2024-06-10 12:18:05.441072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.067 qpair failed and we were unable to recover it. 
00:29:16.067 [2024-06-10 12:18:05.441254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.067 [2024-06-10 12:18:05.441271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.067 qpair failed and we were unable to recover it. 00:29:16.067 [2024-06-10 12:18:05.441429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.067 [2024-06-10 12:18:05.441446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.067 qpair failed and we were unable to recover it. 00:29:16.067 [2024-06-10 12:18:05.441615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.067 [2024-06-10 12:18:05.441632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.067 qpair failed and we were unable to recover it. 00:29:16.067 [2024-06-10 12:18:05.441875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.067 [2024-06-10 12:18:05.441891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.067 qpair failed and we were unable to recover it. 00:29:16.067 [2024-06-10 12:18:05.442068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.067 [2024-06-10 12:18:05.442084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.067 qpair failed and we were unable to recover it. 
00:29:16.067 [2024-06-10 12:18:05.442264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.067 [2024-06-10 12:18:05.442282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.067 qpair failed and we were unable to recover it. 00:29:16.067 [2024-06-10 12:18:05.442386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.068 [2024-06-10 12:18:05.442403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.068 qpair failed and we were unable to recover it. 00:29:16.068 [2024-06-10 12:18:05.442630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.068 [2024-06-10 12:18:05.442647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.068 qpair failed and we were unable to recover it. 00:29:16.068 [2024-06-10 12:18:05.442767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.068 [2024-06-10 12:18:05.442783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.068 qpair failed and we were unable to recover it. 00:29:16.068 [2024-06-10 12:18:05.443035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.068 [2024-06-10 12:18:05.443051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.068 qpair failed and we were unable to recover it. 
00:29:16.068 [2024-06-10 12:18:05.443294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.068 [2024-06-10 12:18:05.443310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.068 qpair failed and we were unable to recover it. 00:29:16.068 [2024-06-10 12:18:05.443466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.068 [2024-06-10 12:18:05.443487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.068 qpair failed and we were unable to recover it. 00:29:16.068 [2024-06-10 12:18:05.443626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.068 [2024-06-10 12:18:05.443643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.068 qpair failed and we were unable to recover it. 00:29:16.068 [2024-06-10 12:18:05.443774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.068 [2024-06-10 12:18:05.443790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.068 qpair failed and we were unable to recover it. 00:29:16.068 [2024-06-10 12:18:05.443919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.068 [2024-06-10 12:18:05.443938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.068 qpair failed and we were unable to recover it. 
00:29:16.068 [2024-06-10 12:18:05.444068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.068 [2024-06-10 12:18:05.444084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.068 qpair failed and we were unable to recover it. 00:29:16.068 [2024-06-10 12:18:05.444251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.068 [2024-06-10 12:18:05.444268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.068 qpair failed and we were unable to recover it. 00:29:16.068 [2024-06-10 12:18:05.444437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.068 [2024-06-10 12:18:05.444453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.068 qpair failed and we were unable to recover it. 00:29:16.068 [2024-06-10 12:18:05.444627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.068 [2024-06-10 12:18:05.444644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.068 qpair failed and we were unable to recover it. 00:29:16.068 [2024-06-10 12:18:05.444771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.068 [2024-06-10 12:18:05.444787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.068 qpair failed and we were unable to recover it. 
00:29:16.068 [2024-06-10 12:18:05.445027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.068 [2024-06-10 12:18:05.445044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.068 qpair failed and we were unable to recover it. 00:29:16.068 [2024-06-10 12:18:05.445202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.068 [2024-06-10 12:18:05.445218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.068 qpair failed and we were unable to recover it. 00:29:16.068 [2024-06-10 12:18:05.445337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.068 [2024-06-10 12:18:05.445353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.068 qpair failed and we were unable to recover it. 00:29:16.068 [2024-06-10 12:18:05.445512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.068 [2024-06-10 12:18:05.445529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.068 qpair failed and we were unable to recover it. 00:29:16.068 [2024-06-10 12:18:05.445696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.068 [2024-06-10 12:18:05.445713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.068 qpair failed and we were unable to recover it. 
00:29:16.068 [2024-06-10 12:18:05.445841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.068 [2024-06-10 12:18:05.445857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.068 qpair failed and we were unable to recover it. 00:29:16.068 [2024-06-10 12:18:05.445978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.068 [2024-06-10 12:18:05.445996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.068 qpair failed and we were unable to recover it. 00:29:16.068 [2024-06-10 12:18:05.446301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.068 [2024-06-10 12:18:05.446318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.068 qpair failed and we were unable to recover it. 00:29:16.068 [2024-06-10 12:18:05.446564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.068 [2024-06-10 12:18:05.446580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.068 qpair failed and we were unable to recover it. 00:29:16.068 [2024-06-10 12:18:05.446705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.068 [2024-06-10 12:18:05.446721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.068 qpair failed and we were unable to recover it. 
00:29:16.068 [2024-06-10 12:18:05.446890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.068 [2024-06-10 12:18:05.446906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.068 qpair failed and we were unable to recover it. 00:29:16.068 [2024-06-10 12:18:05.447154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.068 [2024-06-10 12:18:05.447170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.068 qpair failed and we were unable to recover it. 00:29:16.068 [2024-06-10 12:18:05.447343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.068 [2024-06-10 12:18:05.447359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.068 qpair failed and we were unable to recover it. 00:29:16.068 [2024-06-10 12:18:05.447607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.068 [2024-06-10 12:18:05.447624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.068 qpair failed and we were unable to recover it. 00:29:16.068 [2024-06-10 12:18:05.447806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.068 [2024-06-10 12:18:05.447822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.068 qpair failed and we were unable to recover it. 
00:29:16.068 [2024-06-10 12:18:05.447931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.068 [2024-06-10 12:18:05.447947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.068 qpair failed and we were unable to recover it. 00:29:16.068 [2024-06-10 12:18:05.448071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.068 [2024-06-10 12:18:05.448087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.068 qpair failed and we were unable to recover it. 00:29:16.068 [2024-06-10 12:18:05.448379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.068 [2024-06-10 12:18:05.448395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.068 qpair failed and we were unable to recover it. 00:29:16.068 [2024-06-10 12:18:05.448649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.068 [2024-06-10 12:18:05.448666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.068 qpair failed and we were unable to recover it. 00:29:16.068 [2024-06-10 12:18:05.448773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.069 [2024-06-10 12:18:05.448790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.069 qpair failed and we were unable to recover it. 
00:29:16.069 [2024-06-10 12:18:05.448902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.069 [2024-06-10 12:18:05.448919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.069 qpair failed and we were unable to recover it. 00:29:16.069 [2024-06-10 12:18:05.449120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.069 [2024-06-10 12:18:05.449137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.069 qpair failed and we were unable to recover it. 00:29:16.069 [2024-06-10 12:18:05.449402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.069 [2024-06-10 12:18:05.449419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.069 qpair failed and we were unable to recover it. 00:29:16.069 [2024-06-10 12:18:05.449675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.069 [2024-06-10 12:18:05.449692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.069 qpair failed and we were unable to recover it. 00:29:16.069 [2024-06-10 12:18:05.449806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.069 [2024-06-10 12:18:05.449823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.069 qpair failed and we were unable to recover it. 
00:29:16.069 [2024-06-10 12:18:05.449950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.069 [2024-06-10 12:18:05.449966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.069 qpair failed and we were unable to recover it. 00:29:16.069 [2024-06-10 12:18:05.450192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.069 [2024-06-10 12:18:05.450208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.069 qpair failed and we were unable to recover it. 00:29:16.069 [2024-06-10 12:18:05.450325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.069 [2024-06-10 12:18:05.450341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.069 qpair failed and we were unable to recover it. 00:29:16.069 [2024-06-10 12:18:05.450606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.069 [2024-06-10 12:18:05.450623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.069 qpair failed and we were unable to recover it. 00:29:16.069 [2024-06-10 12:18:05.450790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.069 [2024-06-10 12:18:05.450807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.069 qpair failed and we were unable to recover it. 
00:29:16.069 [2024-06-10 12:18:05.451121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.069 [2024-06-10 12:18:05.451138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.069 qpair failed and we were unable to recover it. 00:29:16.069 [2024-06-10 12:18:05.451328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.069 [2024-06-10 12:18:05.451344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.069 qpair failed and we were unable to recover it. 00:29:16.069 [2024-06-10 12:18:05.451516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.069 [2024-06-10 12:18:05.451532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.069 qpair failed and we were unable to recover it. 00:29:16.069 [2024-06-10 12:18:05.451656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.069 [2024-06-10 12:18:05.451672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.069 qpair failed and we were unable to recover it. 00:29:16.069 [2024-06-10 12:18:05.451849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.069 [2024-06-10 12:18:05.451866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.069 qpair failed and we were unable to recover it. 
00:29:16.069 [2024-06-10 12:18:05.452041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.069 [2024-06-10 12:18:05.452066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.069 qpair failed and we were unable to recover it. 00:29:16.069 [2024-06-10 12:18:05.452297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.069 [2024-06-10 12:18:05.452314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.069 qpair failed and we were unable to recover it. 00:29:16.069 [2024-06-10 12:18:05.452498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.069 [2024-06-10 12:18:05.452514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.069 qpair failed and we were unable to recover it. 00:29:16.069 [2024-06-10 12:18:05.452660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.069 [2024-06-10 12:18:05.452676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.069 qpair failed and we were unable to recover it. 00:29:16.069 [2024-06-10 12:18:05.452927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.069 [2024-06-10 12:18:05.452943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.069 qpair failed and we were unable to recover it. 
00:29:16.069 [2024-06-10 12:18:05.453121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.069 [2024-06-10 12:18:05.453137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.069 qpair failed and we were unable to recover it.
00:29:16.069 [2024-06-10 12:18:05.453320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.069 [2024-06-10 12:18:05.453340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.069 qpair failed and we were unable to recover it.
00:29:16.069 [2024-06-10 12:18:05.453518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.069 [2024-06-10 12:18:05.453535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.069 qpair failed and we were unable to recover it.
00:29:16.069 [2024-06-10 12:18:05.453704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.069 [2024-06-10 12:18:05.453720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.069 qpair failed and we were unable to recover it.
00:29:16.069 [2024-06-10 12:18:05.453944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.069 [2024-06-10 12:18:05.453959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.069 qpair failed and we were unable to recover it.
00:29:16.069 [2024-06-10 12:18:05.454254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.069 [2024-06-10 12:18:05.454270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.069 qpair failed and we were unable to recover it.
00:29:16.069 [2024-06-10 12:18:05.454438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.069 [2024-06-10 12:18:05.454454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.069 qpair failed and we were unable to recover it.
00:29:16.069 [2024-06-10 12:18:05.454650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.069 [2024-06-10 12:18:05.454666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.069 qpair failed and we were unable to recover it.
00:29:16.069 [2024-06-10 12:18:05.454833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.069 [2024-06-10 12:18:05.454854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.069 qpair failed and we were unable to recover it.
00:29:16.069 [2024-06-10 12:18:05.454980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.069 [2024-06-10 12:18:05.454996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.069 qpair failed and we were unable to recover it.
00:29:16.069 [2024-06-10 12:18:05.455194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.069 [2024-06-10 12:18:05.455211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.069 qpair failed and we were unable to recover it.
00:29:16.069 [2024-06-10 12:18:05.455435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.069 [2024-06-10 12:18:05.455452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.069 qpair failed and we were unable to recover it.
00:29:16.069 [2024-06-10 12:18:05.455691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.069 [2024-06-10 12:18:05.455708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.069 qpair failed and we were unable to recover it.
00:29:16.069 [2024-06-10 12:18:05.455818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.069 [2024-06-10 12:18:05.455834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.069 qpair failed and we were unable to recover it.
00:29:16.069 [2024-06-10 12:18:05.456061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.069 [2024-06-10 12:18:05.456077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.069 qpair failed and we were unable to recover it.
00:29:16.069 [2024-06-10 12:18:05.456308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.069 [2024-06-10 12:18:05.456324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.069 qpair failed and we were unable to recover it.
00:29:16.069 [2024-06-10 12:18:05.456553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.069 [2024-06-10 12:18:05.456570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.069 qpair failed and we were unable to recover it.
00:29:16.069 [2024-06-10 12:18:05.456743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.069 [2024-06-10 12:18:05.456759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.069 qpair failed and we were unable to recover it.
00:29:16.069 [2024-06-10 12:18:05.457004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.069 [2024-06-10 12:18:05.457020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.069 qpair failed and we were unable to recover it.
00:29:16.069 [2024-06-10 12:18:05.457195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.069 [2024-06-10 12:18:05.457211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.069 qpair failed and we were unable to recover it.
00:29:16.069 [2024-06-10 12:18:05.457462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.457484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.457612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.457628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.457789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.457805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.457919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.457936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.458100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.458116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.458381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.458398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.458621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.458638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.458879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.458895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.459016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.459033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.459201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.459218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.459387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.459403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.459627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.459644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.459814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.459831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.459938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.459954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.460198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.460214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.460391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.460408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.460604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.460620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.460788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.460804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.460984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.461001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.461201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.461217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.461386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.461402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.461582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.461598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.461771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.461787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.462030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.462047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.462239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.462255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.462550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.462567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.462764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.462781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.462957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.462974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.463265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.463281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.463536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.463553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.463727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.463744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.463915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.463932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.464041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.464057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.464289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.464306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.464551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.464567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.464696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.464712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.464950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.464966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.465173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.465189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.465361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.465377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.465649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.465666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.465889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.465905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.466080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.070 [2024-06-10 12:18:05.466096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.070 qpair failed and we were unable to recover it.
00:29:16.070 [2024-06-10 12:18:05.466351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.071 [2024-06-10 12:18:05.466367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.071 qpair failed and we were unable to recover it.
00:29:16.071 [2024-06-10 12:18:05.466620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.071 [2024-06-10 12:18:05.466637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.071 qpair failed and we were unable to recover it.
00:29:16.071 [2024-06-10 12:18:05.466813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.071 [2024-06-10 12:18:05.466829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.071 qpair failed and we were unable to recover it.
00:29:16.071 [2024-06-10 12:18:05.467026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.071 [2024-06-10 12:18:05.467042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.071 qpair failed and we were unable to recover it.
00:29:16.071 [2024-06-10 12:18:05.467209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.071 [2024-06-10 12:18:05.467225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.071 qpair failed and we were unable to recover it.
00:29:16.071 [2024-06-10 12:18:05.467329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.071 [2024-06-10 12:18:05.467345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.071 qpair failed and we were unable to recover it.
00:29:16.071 [2024-06-10 12:18:05.467627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.071 [2024-06-10 12:18:05.467644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.071 qpair failed and we were unable to recover it.
00:29:16.071 [2024-06-10 12:18:05.467804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.071 [2024-06-10 12:18:05.467820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.071 qpair failed and we were unable to recover it.
00:29:16.071 [2024-06-10 12:18:05.467995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.071 [2024-06-10 12:18:05.468011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.071 qpair failed and we were unable to recover it.
00:29:16.071 [2024-06-10 12:18:05.468243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.071 [2024-06-10 12:18:05.468260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.071 qpair failed and we were unable to recover it.
00:29:16.071 [2024-06-10 12:18:05.468442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.071 [2024-06-10 12:18:05.468458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.071 qpair failed and we were unable to recover it.
00:29:16.071 [2024-06-10 12:18:05.468671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.071 [2024-06-10 12:18:05.468690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.071 qpair failed and we were unable to recover it.
00:29:16.071 [2024-06-10 12:18:05.468875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.071 [2024-06-10 12:18:05.468892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.071 qpair failed and we were unable to recover it.
00:29:16.071 [2024-06-10 12:18:05.469066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.071 [2024-06-10 12:18:05.469084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.071 qpair failed and we were unable to recover it.
00:29:16.071 [2024-06-10 12:18:05.469316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.071 [2024-06-10 12:18:05.469333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.071 qpair failed and we were unable to recover it.
00:29:16.071 [2024-06-10 12:18:05.469592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.071 [2024-06-10 12:18:05.469610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.071 qpair failed and we were unable to recover it.
00:29:16.071 [2024-06-10 12:18:05.469787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.071 [2024-06-10 12:18:05.469803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.071 qpair failed and we were unable to recover it.
00:29:16.071 [2024-06-10 12:18:05.469979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.071 [2024-06-10 12:18:05.469995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.071 qpair failed and we were unable to recover it.
00:29:16.071 [2024-06-10 12:18:05.470171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.071 [2024-06-10 12:18:05.470187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.071 qpair failed and we were unable to recover it.
00:29:16.071 [2024-06-10 12:18:05.470351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.071 [2024-06-10 12:18:05.470367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.071 qpair failed and we were unable to recover it.
00:29:16.071 [2024-06-10 12:18:05.470528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.071 [2024-06-10 12:18:05.470546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.071 qpair failed and we were unable to recover it.
00:29:16.071 [2024-06-10 12:18:05.470812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.071 [2024-06-10 12:18:05.470828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.071 qpair failed and we were unable to recover it.
00:29:16.071 [2024-06-10 12:18:05.470941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.071 [2024-06-10 12:18:05.470957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.071 qpair failed and we were unable to recover it.
00:29:16.071 [2024-06-10 12:18:05.471233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.071 [2024-06-10 12:18:05.471249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.071 qpair failed and we were unable to recover it.
00:29:16.071 [2024-06-10 12:18:05.471420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.071 [2024-06-10 12:18:05.471437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.071 qpair failed and we were unable to recover it.
00:29:16.071 [2024-06-10 12:18:05.471630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.071 [2024-06-10 12:18:05.471647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.071 qpair failed and we were unable to recover it.
00:29:16.071 [2024-06-10 12:18:05.471860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.071 [2024-06-10 12:18:05.471876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.071 qpair failed and we were unable to recover it.
00:29:16.071 [2024-06-10 12:18:05.472062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.071 [2024-06-10 12:18:05.472078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.071 qpair failed and we were unable to recover it.
00:29:16.071 [2024-06-10 12:18:05.472245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.071 [2024-06-10 12:18:05.472261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.071 qpair failed and we were unable to recover it.
00:29:16.071 [2024-06-10 12:18:05.472493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.071 [2024-06-10 12:18:05.472510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.071 qpair failed and we were unable to recover it.
00:29:16.071 [2024-06-10 12:18:05.472690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.071 [2024-06-10 12:18:05.472707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.071 qpair failed and we were unable to recover it.
00:29:16.071 [2024-06-10 12:18:05.472950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.071 [2024-06-10 12:18:05.472967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.071 qpair failed and we were unable to recover it.
00:29:16.071 [2024-06-10 12:18:05.473215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.071 [2024-06-10 12:18:05.473231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.071 qpair failed and we were unable to recover it.
00:29:16.071 [2024-06-10 12:18:05.473398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.071 [2024-06-10 12:18:05.473414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.071 qpair failed and we were unable to recover it. 00:29:16.071 [2024-06-10 12:18:05.473616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.071 [2024-06-10 12:18:05.473634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.071 qpair failed and we were unable to recover it. 00:29:16.071 [2024-06-10 12:18:05.473857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.071 [2024-06-10 12:18:05.473874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.071 qpair failed and we were unable to recover it. 00:29:16.071 [2024-06-10 12:18:05.473991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.071 [2024-06-10 12:18:05.474007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.071 qpair failed and we were unable to recover it. 00:29:16.071 [2024-06-10 12:18:05.474183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.071 [2024-06-10 12:18:05.474199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.071 qpair failed and we were unable to recover it. 
00:29:16.071 [2024-06-10 12:18:05.474317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.071 [2024-06-10 12:18:05.474333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.071 qpair failed and we were unable to recover it. 00:29:16.071 [2024-06-10 12:18:05.474427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.071 [2024-06-10 12:18:05.474443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.071 qpair failed and we were unable to recover it. 00:29:16.071 [2024-06-10 12:18:05.474630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.071 [2024-06-10 12:18:05.474652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.071 qpair failed and we were unable to recover it. 00:29:16.071 [2024-06-10 12:18:05.474830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.071 [2024-06-10 12:18:05.474846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.071 qpair failed and we were unable to recover it. 00:29:16.071 [2024-06-10 12:18:05.475026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.475042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 
00:29:16.072 [2024-06-10 12:18:05.475289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.475305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 00:29:16.072 [2024-06-10 12:18:05.475471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.475493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 00:29:16.072 [2024-06-10 12:18:05.475733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.475750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 00:29:16.072 [2024-06-10 12:18:05.475974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.475990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 00:29:16.072 [2024-06-10 12:18:05.476230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.476246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 
00:29:16.072 [2024-06-10 12:18:05.476351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.476367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 00:29:16.072 [2024-06-10 12:18:05.476606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.476623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 00:29:16.072 [2024-06-10 12:18:05.476795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.476812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 00:29:16.072 [2024-06-10 12:18:05.476933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.476949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 00:29:16.072 [2024-06-10 12:18:05.477115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.477132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 
00:29:16.072 [2024-06-10 12:18:05.477235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.477252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 00:29:16.072 [2024-06-10 12:18:05.477505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.477522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 00:29:16.072 [2024-06-10 12:18:05.477628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.477644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 00:29:16.072 [2024-06-10 12:18:05.477898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.477915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 00:29:16.072 [2024-06-10 12:18:05.478134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.478150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 
00:29:16.072 [2024-06-10 12:18:05.478407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.478423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 00:29:16.072 [2024-06-10 12:18:05.478605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.478621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 00:29:16.072 [2024-06-10 12:18:05.478857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.478873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 00:29:16.072 [2024-06-10 12:18:05.479044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.479060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 00:29:16.072 [2024-06-10 12:18:05.479312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.479328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 
00:29:16.072 [2024-06-10 12:18:05.479569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.479585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 00:29:16.072 [2024-06-10 12:18:05.479756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.479772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 00:29:16.072 [2024-06-10 12:18:05.479937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.479953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 00:29:16.072 [2024-06-10 12:18:05.480145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.480162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 00:29:16.072 [2024-06-10 12:18:05.480405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.480421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 
00:29:16.072 [2024-06-10 12:18:05.480593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.480610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 00:29:16.072 [2024-06-10 12:18:05.480783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.480799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 00:29:16.072 [2024-06-10 12:18:05.481051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.481067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 00:29:16.072 [2024-06-10 12:18:05.481339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.481355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 00:29:16.072 [2024-06-10 12:18:05.481624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.481641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 
00:29:16.072 [2024-06-10 12:18:05.481913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.481929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 00:29:16.072 [2024-06-10 12:18:05.482051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.482067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 00:29:16.072 [2024-06-10 12:18:05.482397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.482413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 00:29:16.072 [2024-06-10 12:18:05.482670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.482686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 00:29:16.072 [2024-06-10 12:18:05.482845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.482861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 
00:29:16.072 [2024-06-10 12:18:05.483110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.483126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 00:29:16.072 [2024-06-10 12:18:05.483323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.483339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 00:29:16.072 [2024-06-10 12:18:05.483564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.072 [2024-06-10 12:18:05.483580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.072 qpair failed and we were unable to recover it. 00:29:16.072 [2024-06-10 12:18:05.483832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.073 [2024-06-10 12:18:05.483851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.073 qpair failed and we were unable to recover it. 00:29:16.073 [2024-06-10 12:18:05.484106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.073 [2024-06-10 12:18:05.484122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.073 qpair failed and we were unable to recover it. 
00:29:16.073 [2024-06-10 12:18:05.484345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.073 [2024-06-10 12:18:05.484362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.073 qpair failed and we were unable to recover it. 00:29:16.073 [2024-06-10 12:18:05.484625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.073 [2024-06-10 12:18:05.484642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.073 qpair failed and we were unable to recover it. 00:29:16.073 [2024-06-10 12:18:05.484745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.073 [2024-06-10 12:18:05.484761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.073 qpair failed and we were unable to recover it. 00:29:16.073 [2024-06-10 12:18:05.485005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.073 [2024-06-10 12:18:05.485021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.073 qpair failed and we were unable to recover it. 00:29:16.073 [2024-06-10 12:18:05.485222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.073 [2024-06-10 12:18:05.485238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.073 qpair failed and we were unable to recover it. 
00:29:16.073 [2024-06-10 12:18:05.485506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.073 [2024-06-10 12:18:05.485522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.073 qpair failed and we were unable to recover it. 00:29:16.073 [2024-06-10 12:18:05.485742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.073 [2024-06-10 12:18:05.485759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.073 qpair failed and we were unable to recover it. 00:29:16.073 [2024-06-10 12:18:05.485916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.073 [2024-06-10 12:18:05.485932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.073 qpair failed and we were unable to recover it. 00:29:16.073 [2024-06-10 12:18:05.486186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.073 [2024-06-10 12:18:05.486202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.073 qpair failed and we were unable to recover it. 00:29:16.073 [2024-06-10 12:18:05.486425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.073 [2024-06-10 12:18:05.486441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.073 qpair failed and we were unable to recover it. 
00:29:16.073 [2024-06-10 12:18:05.486705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.073 [2024-06-10 12:18:05.486722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.073 qpair failed and we were unable to recover it. 00:29:16.073 [2024-06-10 12:18:05.486889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.073 [2024-06-10 12:18:05.486905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.073 qpair failed and we were unable to recover it. 00:29:16.073 [2024-06-10 12:18:05.487080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.073 [2024-06-10 12:18:05.487096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.073 qpair failed and we were unable to recover it. 00:29:16.073 [2024-06-10 12:18:05.487347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.073 [2024-06-10 12:18:05.487363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.073 qpair failed and we were unable to recover it. 00:29:16.073 [2024-06-10 12:18:05.487521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.073 [2024-06-10 12:18:05.487538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.073 qpair failed and we were unable to recover it. 
00:29:16.073 [2024-06-10 12:18:05.487711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.073 [2024-06-10 12:18:05.487727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.073 qpair failed and we were unable to recover it. 00:29:16.073 [2024-06-10 12:18:05.487918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.073 [2024-06-10 12:18:05.487934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.073 qpair failed and we were unable to recover it. 00:29:16.073 [2024-06-10 12:18:05.488143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.073 [2024-06-10 12:18:05.488160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.073 qpair failed and we were unable to recover it. 00:29:16.073 [2024-06-10 12:18:05.488431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.073 [2024-06-10 12:18:05.488447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.073 qpair failed and we were unable to recover it. 00:29:16.073 [2024-06-10 12:18:05.488696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.073 [2024-06-10 12:18:05.488713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.073 qpair failed and we were unable to recover it. 
00:29:16.073 [2024-06-10 12:18:05.488959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.073 [2024-06-10 12:18:05.488975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.073 qpair failed and we were unable to recover it. 00:29:16.073 [2024-06-10 12:18:05.489230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.073 [2024-06-10 12:18:05.489246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.073 qpair failed and we were unable to recover it. 00:29:16.073 [2024-06-10 12:18:05.489428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.073 [2024-06-10 12:18:05.489445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.073 qpair failed and we were unable to recover it. 00:29:16.073 [2024-06-10 12:18:05.489726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.073 [2024-06-10 12:18:05.489743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.073 qpair failed and we were unable to recover it. 00:29:16.073 [2024-06-10 12:18:05.489992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.073 [2024-06-10 12:18:05.490008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.074 qpair failed and we were unable to recover it. 
00:29:16.074 [2024-06-10 12:18:05.490243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.074 [2024-06-10 12:18:05.490262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.074 qpair failed and we were unable to recover it. 00:29:16.074 [2024-06-10 12:18:05.490507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.074 [2024-06-10 12:18:05.490524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.074 qpair failed and we were unable to recover it. 00:29:16.074 [2024-06-10 12:18:05.490628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.074 [2024-06-10 12:18:05.490644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.074 qpair failed and we were unable to recover it. 00:29:16.074 [2024-06-10 12:18:05.490774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.074 [2024-06-10 12:18:05.490791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.074 qpair failed and we were unable to recover it. 00:29:16.074 [2024-06-10 12:18:05.490920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.074 [2024-06-10 12:18:05.490937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.074 qpair failed and we were unable to recover it. 
00:29:16.074 [2024-06-10 12:18:05.491103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.074 [2024-06-10 12:18:05.491119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.074 qpair failed and we were unable to recover it. 00:29:16.074 [2024-06-10 12:18:05.491304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.074 [2024-06-10 12:18:05.491321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.074 qpair failed and we were unable to recover it. 00:29:16.074 [2024-06-10 12:18:05.491423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.074 [2024-06-10 12:18:05.491439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.074 qpair failed and we were unable to recover it. 00:29:16.074 [2024-06-10 12:18:05.491701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.074 [2024-06-10 12:18:05.491717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.074 qpair failed and we were unable to recover it. 00:29:16.074 [2024-06-10 12:18:05.491943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.074 [2024-06-10 12:18:05.491960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.074 qpair failed and we were unable to recover it. 
00:29:16.074 [2024-06-10 12:18:05.492236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.074 [2024-06-10 12:18:05.492252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.074 qpair failed and we were unable to recover it.
00:29:16.074 [2024-06-10 12:18:05.492421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.074 [2024-06-10 12:18:05.492437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.074 qpair failed and we were unable to recover it.
00:29:16.074 [2024-06-10 12:18:05.492714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.074 [2024-06-10 12:18:05.492731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.074 qpair failed and we were unable to recover it.
00:29:16.074 [2024-06-10 12:18:05.492845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.074 [2024-06-10 12:18:05.492861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.074 qpair failed and we were unable to recover it.
00:29:16.074 [2024-06-10 12:18:05.493068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.074 [2024-06-10 12:18:05.493086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.074 qpair failed and we were unable to recover it.
00:29:16.074 [2024-06-10 12:18:05.493192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.074 [2024-06-10 12:18:05.493209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.074 qpair failed and we were unable to recover it.
00:29:16.074 [2024-06-10 12:18:05.493486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.074 [2024-06-10 12:18:05.493502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.074 qpair failed and we were unable to recover it.
00:29:16.074 [2024-06-10 12:18:05.493681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.074 [2024-06-10 12:18:05.493697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.074 qpair failed and we were unable to recover it.
00:29:16.074 [2024-06-10 12:18:05.493871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.074 [2024-06-10 12:18:05.493887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.074 qpair failed and we were unable to recover it.
00:29:16.074 [2024-06-10 12:18:05.494005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.074 [2024-06-10 12:18:05.494021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.074 qpair failed and we were unable to recover it.
00:29:16.074 [2024-06-10 12:18:05.494282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.074 [2024-06-10 12:18:05.494298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.074 qpair failed and we were unable to recover it.
00:29:16.074 [2024-06-10 12:18:05.494550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.074 [2024-06-10 12:18:05.494567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.074 qpair failed and we were unable to recover it.
00:29:16.074 [2024-06-10 12:18:05.494802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.074 [2024-06-10 12:18:05.494818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.074 qpair failed and we were unable to recover it.
00:29:16.074 [2024-06-10 12:18:05.495013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.074 [2024-06-10 12:18:05.495029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.074 qpair failed and we were unable to recover it.
00:29:16.074 [2024-06-10 12:18:05.495206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.074 [2024-06-10 12:18:05.495222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.074 qpair failed and we were unable to recover it.
00:29:16.074 [2024-06-10 12:18:05.495379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.074 [2024-06-10 12:18:05.495395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.074 qpair failed and we were unable to recover it.
00:29:16.074 [2024-06-10 12:18:05.495640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.074 [2024-06-10 12:18:05.495657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.074 qpair failed and we were unable to recover it.
00:29:16.074 [2024-06-10 12:18:05.495840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.074 [2024-06-10 12:18:05.495859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.074 qpair failed and we were unable to recover it.
00:29:16.074 [2024-06-10 12:18:05.495978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.074 [2024-06-10 12:18:05.495994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.074 qpair failed and we were unable to recover it.
00:29:16.074 [2024-06-10 12:18:05.496103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.074 [2024-06-10 12:18:05.496119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.074 qpair failed and we were unable to recover it.
00:29:16.074 [2024-06-10 12:18:05.496363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.074 [2024-06-10 12:18:05.496379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.074 qpair failed and we were unable to recover it.
00:29:16.074 [2024-06-10 12:18:05.496534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.074 [2024-06-10 12:18:05.496551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.074 qpair failed and we were unable to recover it.
00:29:16.074 [2024-06-10 12:18:05.496800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.074 [2024-06-10 12:18:05.496816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.074 qpair failed and we were unable to recover it.
00:29:16.074 [2024-06-10 12:18:05.496989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.074 [2024-06-10 12:18:05.497006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.074 qpair failed and we were unable to recover it.
00:29:16.074 [2024-06-10 12:18:05.497118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.074 [2024-06-10 12:18:05.497134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.074 qpair failed and we were unable to recover it.
00:29:16.074 [2024-06-10 12:18:05.497317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.074 [2024-06-10 12:18:05.497333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.074 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.497533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.497549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.497737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.497753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.497911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.497927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.498105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.498121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.498368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.498384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.498601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.498618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.498722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.498738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.498986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.499002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.499306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.499322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.499566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.499583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.499772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.499788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.500031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.500047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.500236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.500252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.500434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.500450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.500695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.500711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.500902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.500919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.501172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.501188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.501418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.501434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.501607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.501624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.501779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.501795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.501990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.502007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.502246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.502263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.502501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.502518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.502753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.502769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.502961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.502977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.503179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.503196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.503460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.503480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.503642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.503659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.503882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.503898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.504114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.504130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.504329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.504346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.504530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.504549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.504729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.504745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.505010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.505027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.505274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.505290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.075 [2024-06-10 12:18:05.505526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.075 [2024-06-10 12:18:05.505542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.075 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.505669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.505686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.505898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.505914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.506043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.506059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.506248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.506265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.506433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.506450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.506631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.506648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.506895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.506911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.507215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.507232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.507485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.507502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.507681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.507697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.507864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.507880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.508145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.508162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.508327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.508344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.508566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.508583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.508762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.508778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.509004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.509020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.509177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.509193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.509362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.509378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.509643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.509659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.509909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.509925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.510098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.510114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.510292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.510309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.510602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.510626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.510842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.510858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.511118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.511135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.511309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.511325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.511504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.511520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.511645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.511662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.511893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.511910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.512160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.512177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.512335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.512352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.512543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.512559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.512808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.512824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.513072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.513089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.513258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.513274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.513505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.513524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.513712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.513728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.076 [2024-06-10 12:18:05.513922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.076 [2024-06-10 12:18:05.513938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.076 qpair failed and we were unable to recover it.
00:29:16.077 [2024-06-10 12:18:05.514120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.077 [2024-06-10 12:18:05.514136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.077 qpair failed and we were unable to recover it.
00:29:16.077 [2024-06-10 12:18:05.514382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.077 [2024-06-10 12:18:05.514399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.077 qpair failed and we were unable to recover it.
00:29:16.077 [2024-06-10 12:18:05.514580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.077 [2024-06-10 12:18:05.514596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.077 qpair failed and we were unable to recover it.
00:29:16.077 [2024-06-10 12:18:05.514768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.077 [2024-06-10 12:18:05.514784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.077 qpair failed and we were unable to recover it.
00:29:16.077 [2024-06-10 12:18:05.515018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.077 [2024-06-10 12:18:05.515035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.077 qpair failed and we were unable to recover it.
00:29:16.077 [2024-06-10 12:18:05.515222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.077 [2024-06-10 12:18:05.515237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.077 qpair failed and we were unable to recover it.
00:29:16.077 [2024-06-10 12:18:05.515458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.077 [2024-06-10 12:18:05.515475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.077 qpair failed and we were unable to recover it.
00:29:16.077 [2024-06-10 12:18:05.515724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.077 [2024-06-10 12:18:05.515740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.077 qpair failed and we were unable to recover it.
00:29:16.077 [2024-06-10 12:18:05.515914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.077 [2024-06-10 12:18:05.515930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.077 qpair failed and we were unable to recover it.
00:29:16.077 [2024-06-10 12:18:05.516120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.077 [2024-06-10 12:18:05.516136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.077 qpair failed and we were unable to recover it.
00:29:16.077 [2024-06-10 12:18:05.516408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.077 [2024-06-10 12:18:05.516424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.077 qpair failed and we were unable to recover it.
00:29:16.077 [2024-06-10 12:18:05.516655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.077 [2024-06-10 12:18:05.516672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.077 qpair failed and we were unable to recover it.
00:29:16.077 [2024-06-10 12:18:05.516892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.077 [2024-06-10 12:18:05.516908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.077 qpair failed and we were unable to recover it.
00:29:16.077 [2024-06-10 12:18:05.517082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.077 [2024-06-10 12:18:05.517098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.077 qpair failed and we were unable to recover it.
00:29:16.077 [2024-06-10 12:18:05.517345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.077 [2024-06-10 12:18:05.517362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.077 qpair failed and we were unable to recover it.
00:29:16.077 [2024-06-10 12:18:05.517609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.077 [2024-06-10 12:18:05.517625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.077 qpair failed and we were unable to recover it. 00:29:16.077 [2024-06-10 12:18:05.517846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.077 [2024-06-10 12:18:05.517863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.077 qpair failed and we were unable to recover it. 00:29:16.077 [2024-06-10 12:18:05.517988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.077 [2024-06-10 12:18:05.518003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.077 qpair failed and we were unable to recover it. 00:29:16.077 [2024-06-10 12:18:05.518255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.077 [2024-06-10 12:18:05.518272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.077 qpair failed and we were unable to recover it. 00:29:16.077 [2024-06-10 12:18:05.518444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.077 [2024-06-10 12:18:05.518461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.077 qpair failed and we were unable to recover it. 
00:29:16.077 [2024-06-10 12:18:05.518739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.077 [2024-06-10 12:18:05.518756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.077 qpair failed and we were unable to recover it. 00:29:16.077 [2024-06-10 12:18:05.519000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.077 [2024-06-10 12:18:05.519016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.077 qpair failed and we were unable to recover it. 00:29:16.077 [2024-06-10 12:18:05.519207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.077 [2024-06-10 12:18:05.519223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.077 qpair failed and we were unable to recover it. 00:29:16.077 [2024-06-10 12:18:05.519462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.077 [2024-06-10 12:18:05.519483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.077 qpair failed and we were unable to recover it. 00:29:16.077 [2024-06-10 12:18:05.519747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.077 [2024-06-10 12:18:05.519765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.077 qpair failed and we were unable to recover it. 
00:29:16.077 [2024-06-10 12:18:05.519896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.077 [2024-06-10 12:18:05.519913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.077 qpair failed and we were unable to recover it. 00:29:16.077 [2024-06-10 12:18:05.520027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.077 [2024-06-10 12:18:05.520043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.077 qpair failed and we were unable to recover it. 00:29:16.077 [2024-06-10 12:18:05.520265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.077 [2024-06-10 12:18:05.520281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.077 qpair failed and we were unable to recover it. 00:29:16.077 [2024-06-10 12:18:05.520449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.077 [2024-06-10 12:18:05.520465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.077 qpair failed and we were unable to recover it. 00:29:16.077 [2024-06-10 12:18:05.520697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.077 [2024-06-10 12:18:05.520713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.077 qpair failed and we were unable to recover it. 
00:29:16.077 [2024-06-10 12:18:05.520872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.077 [2024-06-10 12:18:05.520888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.077 qpair failed and we were unable to recover it. 00:29:16.078 [2024-06-10 12:18:05.521125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.078 [2024-06-10 12:18:05.521142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.078 qpair failed and we were unable to recover it. 00:29:16.363 [2024-06-10 12:18:05.521325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.521342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 00:29:16.363 [2024-06-10 12:18:05.521552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.521569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 00:29:16.363 [2024-06-10 12:18:05.521684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.521701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 
00:29:16.363 [2024-06-10 12:18:05.521873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.521889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 00:29:16.363 [2024-06-10 12:18:05.522133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.522151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 00:29:16.363 [2024-06-10 12:18:05.522341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.522358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 00:29:16.363 [2024-06-10 12:18:05.522606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.522623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 00:29:16.363 [2024-06-10 12:18:05.522802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.522818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 
00:29:16.363 [2024-06-10 12:18:05.523116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.523133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 00:29:16.363 [2024-06-10 12:18:05.523379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.523395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 00:29:16.363 [2024-06-10 12:18:05.523655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.523672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 00:29:16.363 [2024-06-10 12:18:05.523908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.523925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 00:29:16.363 [2024-06-10 12:18:05.524045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.524061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 
00:29:16.363 [2024-06-10 12:18:05.524224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.524240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 00:29:16.363 [2024-06-10 12:18:05.524467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.524497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 00:29:16.363 [2024-06-10 12:18:05.524681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.524698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 00:29:16.363 [2024-06-10 12:18:05.524920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.524936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 00:29:16.363 [2024-06-10 12:18:05.525207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.525224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 
00:29:16.363 [2024-06-10 12:18:05.525467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.525496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 00:29:16.363 [2024-06-10 12:18:05.525616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.525633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 00:29:16.363 [2024-06-10 12:18:05.525794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.525810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 00:29:16.363 [2024-06-10 12:18:05.526032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.526048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 00:29:16.363 [2024-06-10 12:18:05.526216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.526231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 
00:29:16.363 [2024-06-10 12:18:05.526451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.526467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 00:29:16.363 [2024-06-10 12:18:05.526717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.526734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 00:29:16.363 [2024-06-10 12:18:05.526937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.526953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 00:29:16.363 [2024-06-10 12:18:05.527124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.527141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 00:29:16.363 [2024-06-10 12:18:05.527313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.527329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 
00:29:16.363 [2024-06-10 12:18:05.527575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.527592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 00:29:16.363 [2024-06-10 12:18:05.527787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.527803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 00:29:16.363 [2024-06-10 12:18:05.528026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.528042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 00:29:16.363 [2024-06-10 12:18:05.528234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.528251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 00:29:16.363 [2024-06-10 12:18:05.528499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.528518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 
00:29:16.363 [2024-06-10 12:18:05.528622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.528639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 00:29:16.363 [2024-06-10 12:18:05.528892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.528908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 00:29:16.363 [2024-06-10 12:18:05.529075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.529091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 00:29:16.363 [2024-06-10 12:18:05.529316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.363 [2024-06-10 12:18:05.529332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.363 qpair failed and we were unable to recover it. 00:29:16.363 [2024-06-10 12:18:05.529601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.529618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 
00:29:16.364 [2024-06-10 12:18:05.529854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.529871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 00:29:16.364 [2024-06-10 12:18:05.530093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.530110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 00:29:16.364 [2024-06-10 12:18:05.530359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.530376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 00:29:16.364 [2024-06-10 12:18:05.530626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.530643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 00:29:16.364 [2024-06-10 12:18:05.530756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.530772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 
00:29:16.364 [2024-06-10 12:18:05.530998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.531014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 00:29:16.364 [2024-06-10 12:18:05.531191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.531207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 00:29:16.364 [2024-06-10 12:18:05.531424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.531440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 00:29:16.364 [2024-06-10 12:18:05.531639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.531655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 00:29:16.364 [2024-06-10 12:18:05.531901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.531918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 
00:29:16.364 [2024-06-10 12:18:05.532164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.532181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 00:29:16.364 [2024-06-10 12:18:05.532290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.532306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 00:29:16.364 [2024-06-10 12:18:05.532541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.532558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 00:29:16.364 [2024-06-10 12:18:05.532749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.532766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 00:29:16.364 [2024-06-10 12:18:05.533080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.533097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 
00:29:16.364 [2024-06-10 12:18:05.533281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.533373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 00:29:16.364 [2024-06-10 12:18:05.533560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.533577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 00:29:16.364 [2024-06-10 12:18:05.533767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.533783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 00:29:16.364 [2024-06-10 12:18:05.533887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.533903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 00:29:16.364 [2024-06-10 12:18:05.534152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.534168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 
00:29:16.364 [2024-06-10 12:18:05.534434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.534450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 00:29:16.364 [2024-06-10 12:18:05.534681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.534698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 00:29:16.364 [2024-06-10 12:18:05.534867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.534884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 00:29:16.364 [2024-06-10 12:18:05.535072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.535089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 00:29:16.364 [2024-06-10 12:18:05.535336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.535353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 
00:29:16.364 [2024-06-10 12:18:05.535525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.535541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 00:29:16.364 [2024-06-10 12:18:05.535714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.535730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 00:29:16.364 [2024-06-10 12:18:05.535897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.535914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 00:29:16.364 [2024-06-10 12:18:05.536186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.536203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 00:29:16.364 [2024-06-10 12:18:05.536411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.536427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 
00:29:16.364 [2024-06-10 12:18:05.536654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.536671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 00:29:16.364 [2024-06-10 12:18:05.536852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.536867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 00:29:16.364 [2024-06-10 12:18:05.537116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.537132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 00:29:16.364 [2024-06-10 12:18:05.537257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.537273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 00:29:16.364 [2024-06-10 12:18:05.537531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.537550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 
00:29:16.364 [2024-06-10 12:18:05.537774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.537790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 00:29:16.364 [2024-06-10 12:18:05.538020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.538036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.364 qpair failed and we were unable to recover it. 00:29:16.364 [2024-06-10 12:18:05.538306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.364 [2024-06-10 12:18:05.538322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 00:29:16.365 [2024-06-10 12:18:05.538500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.538517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 00:29:16.365 [2024-06-10 12:18:05.538743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.538759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 
00:29:16.365 [2024-06-10 12:18:05.538872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.538888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 00:29:16.365 [2024-06-10 12:18:05.539001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.539018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 00:29:16.365 [2024-06-10 12:18:05.539200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.539216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 00:29:16.365 [2024-06-10 12:18:05.539461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.539480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 00:29:16.365 [2024-06-10 12:18:05.539673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.539689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 
00:29:16.365 [2024-06-10 12:18:05.539968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.539984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 00:29:16.365 [2024-06-10 12:18:05.540261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.540277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 00:29:16.365 [2024-06-10 12:18:05.540517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.540534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 00:29:16.365 [2024-06-10 12:18:05.540825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.540842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 00:29:16.365 [2024-06-10 12:18:05.541067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.541083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 
00:29:16.365 [2024-06-10 12:18:05.541318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.541334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 00:29:16.365 [2024-06-10 12:18:05.541610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.541627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 00:29:16.365 [2024-06-10 12:18:05.541849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.541865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 00:29:16.365 [2024-06-10 12:18:05.542105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.542121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 00:29:16.365 [2024-06-10 12:18:05.542377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.542394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 
00:29:16.365 [2024-06-10 12:18:05.542586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.542603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 00:29:16.365 [2024-06-10 12:18:05.545690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.545708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 00:29:16.365 [2024-06-10 12:18:05.545958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.545974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 00:29:16.365 [2024-06-10 12:18:05.546151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.546167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 00:29:16.365 [2024-06-10 12:18:05.546389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.546405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 
00:29:16.365 [2024-06-10 12:18:05.546672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.546689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 00:29:16.365 [2024-06-10 12:18:05.546932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.546949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 00:29:16.365 [2024-06-10 12:18:05.547142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.547158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 00:29:16.365 [2024-06-10 12:18:05.547345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.547362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 00:29:16.365 [2024-06-10 12:18:05.547545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.547560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 
00:29:16.365 [2024-06-10 12:18:05.547737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.547753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 00:29:16.365 [2024-06-10 12:18:05.547883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.547900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 00:29:16.365 [2024-06-10 12:18:05.548005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.548021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 00:29:16.365 [2024-06-10 12:18:05.548275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.548291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 00:29:16.365 [2024-06-10 12:18:05.548524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.548541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 
00:29:16.365 [2024-06-10 12:18:05.548762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.548779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 00:29:16.365 [2024-06-10 12:18:05.548973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.548989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 00:29:16.365 [2024-06-10 12:18:05.549272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.549288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 00:29:16.365 [2024-06-10 12:18:05.549443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.549459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 00:29:16.365 [2024-06-10 12:18:05.549653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.549685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.365 qpair failed and we were unable to recover it. 
00:29:16.365 [2024-06-10 12:18:05.549930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.365 [2024-06-10 12:18:05.549947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 00:29:16.366 [2024-06-10 12:18:05.550177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.550193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 00:29:16.366 [2024-06-10 12:18:05.550464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.550485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 00:29:16.366 [2024-06-10 12:18:05.550644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.550661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 00:29:16.366 [2024-06-10 12:18:05.550855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.550871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 
00:29:16.366 [2024-06-10 12:18:05.551095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.551111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 00:29:16.366 [2024-06-10 12:18:05.551376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.551393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 00:29:16.366 [2024-06-10 12:18:05.551606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.551623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 00:29:16.366 [2024-06-10 12:18:05.551818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.551834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 00:29:16.366 [2024-06-10 12:18:05.552015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.552031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 
00:29:16.366 [2024-06-10 12:18:05.552225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.552241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 00:29:16.366 [2024-06-10 12:18:05.552393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.552409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 00:29:16.366 [2024-06-10 12:18:05.552652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.552669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 00:29:16.366 [2024-06-10 12:18:05.552864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.552880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 00:29:16.366 [2024-06-10 12:18:05.553118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.553135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 
00:29:16.366 [2024-06-10 12:18:05.553326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.553342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 00:29:16.366 [2024-06-10 12:18:05.553589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.553606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 00:29:16.366 [2024-06-10 12:18:05.553790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.553805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 00:29:16.366 [2024-06-10 12:18:05.554057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.554073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 00:29:16.366 [2024-06-10 12:18:05.554336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.554352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 
00:29:16.366 [2024-06-10 12:18:05.554525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.554550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 00:29:16.366 [2024-06-10 12:18:05.554729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.554745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 00:29:16.366 [2024-06-10 12:18:05.554920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.554936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 00:29:16.366 [2024-06-10 12:18:05.555098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.555114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 00:29:16.366 [2024-06-10 12:18:05.555399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.555415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 
00:29:16.366 [2024-06-10 12:18:05.555695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.555712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 00:29:16.366 [2024-06-10 12:18:05.555935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.555953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 00:29:16.366 [2024-06-10 12:18:05.556175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.556192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 00:29:16.366 [2024-06-10 12:18:05.556418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.556434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 00:29:16.366 [2024-06-10 12:18:05.556611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.556627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 
00:29:16.366 [2024-06-10 12:18:05.556796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.556813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 00:29:16.366 [2024-06-10 12:18:05.556993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.557009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 00:29:16.366 [2024-06-10 12:18:05.557232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.557248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 00:29:16.366 [2024-06-10 12:18:05.557502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.557519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 00:29:16.366 [2024-06-10 12:18:05.557642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.557658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 
00:29:16.366 [2024-06-10 12:18:05.557910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.557926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 00:29:16.366 [2024-06-10 12:18:05.558087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.558104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 00:29:16.366 [2024-06-10 12:18:05.558284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.558300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 00:29:16.366 [2024-06-10 12:18:05.558420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.558436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.366 qpair failed and we were unable to recover it. 00:29:16.366 [2024-06-10 12:18:05.558610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.366 [2024-06-10 12:18:05.558626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.367 qpair failed and we were unable to recover it. 
00:29:16.367 [2024-06-10 12:18:05.558821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.367 [2024-06-10 12:18:05.558856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.367 qpair failed and we were unable to recover it. 00:29:16.367 [2024-06-10 12:18:05.559060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.367 [2024-06-10 12:18:05.559078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.367 qpair failed and we were unable to recover it. 00:29:16.367 [2024-06-10 12:18:05.559333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.367 [2024-06-10 12:18:05.559349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.367 qpair failed and we were unable to recover it. 00:29:16.367 [2024-06-10 12:18:05.559610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.367 [2024-06-10 12:18:05.559626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.367 qpair failed and we were unable to recover it. 00:29:16.367 [2024-06-10 12:18:05.559872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.367 [2024-06-10 12:18:05.559888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.367 qpair failed and we were unable to recover it. 
00:29:16.370 [2024-06-10 12:18:05.583001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.370 [2024-06-10 12:18:05.583017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.370 qpair failed and we were unable to recover it. 00:29:16.370 [2024-06-10 12:18:05.583294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.370 [2024-06-10 12:18:05.583310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.370 qpair failed and we were unable to recover it. 00:29:16.370 [2024-06-10 12:18:05.583471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.370 [2024-06-10 12:18:05.583492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.370 qpair failed and we were unable to recover it. 00:29:16.370 [2024-06-10 12:18:05.583604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.370 [2024-06-10 12:18:05.583620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.370 qpair failed and we were unable to recover it. 00:29:16.370 [2024-06-10 12:18:05.583811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.370 [2024-06-10 12:18:05.583827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.370 qpair failed and we were unable to recover it. 
00:29:16.370 [2024-06-10 12:18:05.584066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.370 [2024-06-10 12:18:05.584085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.370 qpair failed and we were unable to recover it. 00:29:16.370 [2024-06-10 12:18:05.584265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.370 [2024-06-10 12:18:05.584281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.370 qpair failed and we were unable to recover it. 00:29:16.370 [2024-06-10 12:18:05.584533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.370 [2024-06-10 12:18:05.584552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.370 qpair failed and we were unable to recover it. 00:29:16.370 [2024-06-10 12:18:05.584737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.370 [2024-06-10 12:18:05.584754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.370 qpair failed and we were unable to recover it. 00:29:16.370 [2024-06-10 12:18:05.584942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.370 [2024-06-10 12:18:05.584958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.370 qpair failed and we were unable to recover it. 
00:29:16.370 [2024-06-10 12:18:05.585082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.370 [2024-06-10 12:18:05.585099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.370 qpair failed and we were unable to recover it. 00:29:16.370 [2024-06-10 12:18:05.585346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.370 [2024-06-10 12:18:05.585363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.370 qpair failed and we were unable to recover it. 00:29:16.370 [2024-06-10 12:18:05.585609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.370 [2024-06-10 12:18:05.585627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.370 qpair failed and we were unable to recover it. 00:29:16.370 [2024-06-10 12:18:05.585815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.370 [2024-06-10 12:18:05.585832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.370 qpair failed and we were unable to recover it. 00:29:16.370 [2024-06-10 12:18:05.586018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.370 [2024-06-10 12:18:05.586034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.370 qpair failed and we were unable to recover it. 
00:29:16.370 [2024-06-10 12:18:05.586212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.370 [2024-06-10 12:18:05.586228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.370 qpair failed and we were unable to recover it. 00:29:16.370 [2024-06-10 12:18:05.586402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.370 [2024-06-10 12:18:05.586418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.370 qpair failed and we were unable to recover it. 00:29:16.370 [2024-06-10 12:18:05.586657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.370 [2024-06-10 12:18:05.586674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.370 qpair failed and we were unable to recover it. 00:29:16.370 [2024-06-10 12:18:05.586847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.370 [2024-06-10 12:18:05.586863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.370 qpair failed and we were unable to recover it. 00:29:16.370 [2024-06-10 12:18:05.587116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.370 [2024-06-10 12:18:05.587133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.370 qpair failed and we were unable to recover it. 
00:29:16.370 [2024-06-10 12:18:05.587329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.370 [2024-06-10 12:18:05.587346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.370 qpair failed and we were unable to recover it. 00:29:16.370 [2024-06-10 12:18:05.587539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.370 [2024-06-10 12:18:05.587557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.370 qpair failed and we were unable to recover it. 00:29:16.370 [2024-06-10 12:18:05.587806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.370 [2024-06-10 12:18:05.587823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.370 qpair failed and we were unable to recover it. 00:29:16.370 [2024-06-10 12:18:05.587988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.370 [2024-06-10 12:18:05.588004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.370 qpair failed and we were unable to recover it. 00:29:16.370 [2024-06-10 12:18:05.588185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.370 [2024-06-10 12:18:05.588201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.370 qpair failed and we were unable to recover it. 
00:29:16.370 [2024-06-10 12:18:05.588360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.370 [2024-06-10 12:18:05.588377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.370 qpair failed and we were unable to recover it. 00:29:16.370 [2024-06-10 12:18:05.588545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.370 [2024-06-10 12:18:05.588563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.370 qpair failed and we were unable to recover it. 00:29:16.370 [2024-06-10 12:18:05.588834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.370 [2024-06-10 12:18:05.588850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.370 qpair failed and we were unable to recover it. 00:29:16.370 [2024-06-10 12:18:05.588964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.370 [2024-06-10 12:18:05.588981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.370 qpair failed and we were unable to recover it. 00:29:16.370 [2024-06-10 12:18:05.589251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.370 [2024-06-10 12:18:05.589267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.370 qpair failed and we were unable to recover it. 
00:29:16.370 [2024-06-10 12:18:05.589513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.370 [2024-06-10 12:18:05.589531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.370 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 12:18:05.589663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.589680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 12:18:05.589873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.589892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 12:18:05.590187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.590203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 12:18:05.590374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.590391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 
00:29:16.371 [2024-06-10 12:18:05.590584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.590602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 12:18:05.590768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.590784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 12:18:05.590956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.590972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 12:18:05.591178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.591195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 12:18:05.591422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.591438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 
00:29:16.371 [2024-06-10 12:18:05.591616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.591633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 12:18:05.591758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.591774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 12:18:05.591980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.591997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 12:18:05.592201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.592217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 12:18:05.592439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.592456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 
00:29:16.371 [2024-06-10 12:18:05.592633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.592650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 12:18:05.592757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.592773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 12:18:05.593000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.593016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 12:18:05.593268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.593284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 12:18:05.593537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.593555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 
00:29:16.371 [2024-06-10 12:18:05.593722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.593738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 12:18:05.593930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.593946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 12:18:05.594201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.594217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 12:18:05.594396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.594412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 12:18:05.594577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.594594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 
00:29:16.371 [2024-06-10 12:18:05.594772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.594789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 12:18:05.595012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.595028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 12:18:05.595296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.595312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 12:18:05.595556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.595573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 12:18:05.595750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.595766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 
00:29:16.371 [2024-06-10 12:18:05.595930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.595946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 12:18:05.596051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.596068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 12:18:05.596319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.596335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 12:18:05.596557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.596574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 12:18:05.596703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.596720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 
00:29:16.371 [2024-06-10 12:18:05.596946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.596963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 12:18:05.597224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.597240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 12:18:05.597512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.597529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 12:18:05.597777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.597793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 12:18:05.597965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.371 [2024-06-10 12:18:05.597981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.371 qpair failed and we were unable to recover it. 
00:29:16.372 [2024-06-10 12:18:05.598273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.372 [2024-06-10 12:18:05.598289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.372 qpair failed and we were unable to recover it. 00:29:16.372 [2024-06-10 12:18:05.598451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.372 [2024-06-10 12:18:05.598467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.372 qpair failed and we were unable to recover it. 00:29:16.372 [2024-06-10 12:18:05.598665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.372 [2024-06-10 12:18:05.598681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.372 qpair failed and we were unable to recover it. 00:29:16.372 [2024-06-10 12:18:05.598857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.372 [2024-06-10 12:18:05.598879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.372 qpair failed and we were unable to recover it. 00:29:16.372 [2024-06-10 12:18:05.599052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.372 [2024-06-10 12:18:05.599068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.372 qpair failed and we were unable to recover it. 
00:29:16.372 [2024-06-10 12:18:05.599306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.372 [2024-06-10 12:18:05.599322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.372 qpair failed and we were unable to recover it.
[error triplet repeated for tqpair=0x88cfc0, timestamps 2024-06-10 12:18:05.599482 through 12:18:05.607254]
00:29:16.373 [2024-06-10 12:18:05.607416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.373 [2024-06-10 12:18:05.607437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.373 qpair failed and we were unable to recover it.
[error triplet repeated for tqpair=0x7f504c000b90, timestamps 2024-06-10 12:18:05.607651 through 12:18:05.623318]
00:29:16.375 [2024-06-10 12:18:05.623542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.375 [2024-06-10 12:18:05.623561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.375 qpair failed and we were unable to recover it. 00:29:16.375 [2024-06-10 12:18:05.623738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.375 [2024-06-10 12:18:05.623755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.375 qpair failed and we were unable to recover it. 00:29:16.375 [2024-06-10 12:18:05.623920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.375 [2024-06-10 12:18:05.623936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.375 qpair failed and we were unable to recover it. 00:29:16.375 [2024-06-10 12:18:05.624127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.375 [2024-06-10 12:18:05.624144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.375 qpair failed and we were unable to recover it. 00:29:16.375 [2024-06-10 12:18:05.624317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.375 [2024-06-10 12:18:05.624344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.375 qpair failed and we were unable to recover it. 
00:29:16.375 [2024-06-10 12:18:05.624461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.375 [2024-06-10 12:18:05.624490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.375 qpair failed and we were unable to recover it. 00:29:16.375 [2024-06-10 12:18:05.624674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.375 [2024-06-10 12:18:05.624692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.375 qpair failed and we were unable to recover it. 00:29:16.375 [2024-06-10 12:18:05.624809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.375 [2024-06-10 12:18:05.624825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.375 qpair failed and we were unable to recover it. 00:29:16.375 [2024-06-10 12:18:05.625047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.375 [2024-06-10 12:18:05.625063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.375 qpair failed and we were unable to recover it. 00:29:16.375 [2024-06-10 12:18:05.625249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.375 [2024-06-10 12:18:05.625266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.375 qpair failed and we were unable to recover it. 
00:29:16.375 [2024-06-10 12:18:05.625442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.375 [2024-06-10 12:18:05.625459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.375 qpair failed and we were unable to recover it. 00:29:16.375 [2024-06-10 12:18:05.625685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.375 [2024-06-10 12:18:05.625702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.375 qpair failed and we were unable to recover it. 00:29:16.375 [2024-06-10 12:18:05.625925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.375 [2024-06-10 12:18:05.625942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.375 qpair failed and we were unable to recover it. 00:29:16.375 [2024-06-10 12:18:05.626152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.375 [2024-06-10 12:18:05.626168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.375 qpair failed and we were unable to recover it. 00:29:16.375 [2024-06-10 12:18:05.626359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.375 [2024-06-10 12:18:05.626375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.375 qpair failed and we were unable to recover it. 
00:29:16.375 [2024-06-10 12:18:05.626545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.375 [2024-06-10 12:18:05.626562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.375 qpair failed and we were unable to recover it. 00:29:16.375 [2024-06-10 12:18:05.626681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.375 [2024-06-10 12:18:05.626699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.375 qpair failed and we were unable to recover it. 00:29:16.375 [2024-06-10 12:18:05.626886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.375 [2024-06-10 12:18:05.626906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.375 qpair failed and we were unable to recover it. 00:29:16.375 [2024-06-10 12:18:05.627137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.375 [2024-06-10 12:18:05.627154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.375 qpair failed and we were unable to recover it. 00:29:16.375 [2024-06-10 12:18:05.627346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.375 [2024-06-10 12:18:05.627363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.375 qpair failed and we were unable to recover it. 
00:29:16.375 [2024-06-10 12:18:05.627545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.375 [2024-06-10 12:18:05.627564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.375 qpair failed and we were unable to recover it. 00:29:16.375 [2024-06-10 12:18:05.627664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.375 [2024-06-10 12:18:05.627682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.375 qpair failed and we were unable to recover it. 00:29:16.375 [2024-06-10 12:18:05.627869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.375 [2024-06-10 12:18:05.627885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.375 qpair failed and we were unable to recover it. 00:29:16.375 [2024-06-10 12:18:05.628058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.375 [2024-06-10 12:18:05.628074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.375 qpair failed and we were unable to recover it. 00:29:16.375 [2024-06-10 12:18:05.628251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.375 [2024-06-10 12:18:05.628268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.375 qpair failed and we were unable to recover it. 
00:29:16.375 [2024-06-10 12:18:05.628531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.375 [2024-06-10 12:18:05.628547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.375 qpair failed and we were unable to recover it. 00:29:16.375 [2024-06-10 12:18:05.628723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.375 [2024-06-10 12:18:05.628739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.375 qpair failed and we were unable to recover it. 00:29:16.375 [2024-06-10 12:18:05.628901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.375 [2024-06-10 12:18:05.628919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.375 qpair failed and we were unable to recover it. 00:29:16.375 [2024-06-10 12:18:05.629047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.376 [2024-06-10 12:18:05.629063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.376 qpair failed and we were unable to recover it. 00:29:16.376 [2024-06-10 12:18:05.629330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.376 [2024-06-10 12:18:05.629347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.376 qpair failed and we were unable to recover it. 
00:29:16.376 [2024-06-10 12:18:05.629636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.376 [2024-06-10 12:18:05.629654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.376 qpair failed and we were unable to recover it. 00:29:16.376 [2024-06-10 12:18:05.629847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.376 [2024-06-10 12:18:05.629863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.376 qpair failed and we were unable to recover it. 00:29:16.376 [2024-06-10 12:18:05.630143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.376 [2024-06-10 12:18:05.630160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.376 qpair failed and we were unable to recover it. 00:29:16.376 [2024-06-10 12:18:05.630417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.376 [2024-06-10 12:18:05.630434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.376 qpair failed and we were unable to recover it. 00:29:16.376 [2024-06-10 12:18:05.630663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.376 [2024-06-10 12:18:05.630680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.376 qpair failed and we were unable to recover it. 
00:29:16.376 [2024-06-10 12:18:05.630856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.376 [2024-06-10 12:18:05.630872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.376 qpair failed and we were unable to recover it. 00:29:16.376 [2024-06-10 12:18:05.631048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.376 [2024-06-10 12:18:05.631065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.376 qpair failed and we were unable to recover it. 00:29:16.376 [2024-06-10 12:18:05.631182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.376 [2024-06-10 12:18:05.631197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.376 qpair failed and we were unable to recover it. 00:29:16.376 [2024-06-10 12:18:05.631367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.376 [2024-06-10 12:18:05.631383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.376 qpair failed and we were unable to recover it. 00:29:16.376 [2024-06-10 12:18:05.631613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.376 [2024-06-10 12:18:05.631631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.376 qpair failed and we were unable to recover it. 
00:29:16.376 [2024-06-10 12:18:05.631756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.376 [2024-06-10 12:18:05.631773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.376 qpair failed and we were unable to recover it. 00:29:16.376 [2024-06-10 12:18:05.631938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.376 [2024-06-10 12:18:05.631954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.376 qpair failed and we were unable to recover it. 00:29:16.376 [2024-06-10 12:18:05.632083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.376 [2024-06-10 12:18:05.632100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.376 qpair failed and we were unable to recover it. 00:29:16.376 [2024-06-10 12:18:05.632213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.376 [2024-06-10 12:18:05.632230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.376 qpair failed and we were unable to recover it. 00:29:16.376 [2024-06-10 12:18:05.632416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.376 [2024-06-10 12:18:05.632437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.376 qpair failed and we were unable to recover it. 
00:29:16.376 [2024-06-10 12:18:05.632690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.376 [2024-06-10 12:18:05.632707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.376 qpair failed and we were unable to recover it. 00:29:16.376 [2024-06-10 12:18:05.632901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.376 [2024-06-10 12:18:05.632917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.376 qpair failed and we were unable to recover it. 00:29:16.376 [2024-06-10 12:18:05.633039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.376 [2024-06-10 12:18:05.633055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.376 qpair failed and we were unable to recover it. 00:29:16.376 [2024-06-10 12:18:05.633328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.376 [2024-06-10 12:18:05.633345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.376 qpair failed and we were unable to recover it. 00:29:16.376 [2024-06-10 12:18:05.633571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.376 [2024-06-10 12:18:05.633587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.376 qpair failed and we were unable to recover it. 
00:29:16.376 [2024-06-10 12:18:05.633746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.376 [2024-06-10 12:18:05.633763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.376 qpair failed and we were unable to recover it. 00:29:16.376 [2024-06-10 12:18:05.633940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.376 [2024-06-10 12:18:05.633957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.376 qpair failed and we were unable to recover it. 00:29:16.376 [2024-06-10 12:18:05.634139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.376 [2024-06-10 12:18:05.634155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.376 qpair failed and we were unable to recover it. 00:29:16.376 [2024-06-10 12:18:05.634326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.376 [2024-06-10 12:18:05.634342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.376 qpair failed and we were unable to recover it. 00:29:16.376 [2024-06-10 12:18:05.634598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.376 [2024-06-10 12:18:05.634615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.376 qpair failed and we were unable to recover it. 
00:29:16.376 [2024-06-10 12:18:05.634785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.376 [2024-06-10 12:18:05.634801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.376 qpair failed and we were unable to recover it. 00:29:16.376 [2024-06-10 12:18:05.634918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.376 [2024-06-10 12:18:05.634935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.376 qpair failed and we were unable to recover it. 00:29:16.376 [2024-06-10 12:18:05.635119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.376 [2024-06-10 12:18:05.635136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.376 qpair failed and we were unable to recover it. 00:29:16.376 [2024-06-10 12:18:05.635257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.376 [2024-06-10 12:18:05.635274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.376 qpair failed and we were unable to recover it. 00:29:16.376 [2024-06-10 12:18:05.635490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.376 [2024-06-10 12:18:05.635507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.376 qpair failed and we were unable to recover it. 
00:29:16.376 [2024-06-10 12:18:05.635756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.376 [2024-06-10 12:18:05.635772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.376 qpair failed and we were unable to recover it. 00:29:16.376 [2024-06-10 12:18:05.635885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.376 [2024-06-10 12:18:05.635901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.377 qpair failed and we were unable to recover it. 00:29:16.377 [2024-06-10 12:18:05.636172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.377 [2024-06-10 12:18:05.636189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.377 qpair failed and we were unable to recover it. 00:29:16.377 [2024-06-10 12:18:05.636298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.377 [2024-06-10 12:18:05.636316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.377 qpair failed and we were unable to recover it. 00:29:16.377 [2024-06-10 12:18:05.636531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.377 [2024-06-10 12:18:05.636549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.377 qpair failed and we were unable to recover it. 
00:29:16.377 [2024-06-10 12:18:05.636659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.377 [2024-06-10 12:18:05.636676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.377 qpair failed and we were unable to recover it. 00:29:16.377 [2024-06-10 12:18:05.636775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.377 [2024-06-10 12:18:05.636793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.377 qpair failed and we were unable to recover it. 00:29:16.377 [2024-06-10 12:18:05.636912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.377 [2024-06-10 12:18:05.636928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.377 qpair failed and we were unable to recover it. 00:29:16.377 [2024-06-10 12:18:05.637040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.377 [2024-06-10 12:18:05.637056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.377 qpair failed and we were unable to recover it. 00:29:16.377 [2024-06-10 12:18:05.637250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.377 [2024-06-10 12:18:05.637267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.377 qpair failed and we were unable to recover it. 
00:29:16.377 [2024-06-10 12:18:05.637461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.377 [2024-06-10 12:18:05.637483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.377 qpair failed and we were unable to recover it. 00:29:16.377 [2024-06-10 12:18:05.637774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.377 [2024-06-10 12:18:05.637793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.377 qpair failed and we were unable to recover it. 00:29:16.377 [2024-06-10 12:18:05.637905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.377 [2024-06-10 12:18:05.637921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.377 qpair failed and we were unable to recover it. 00:29:16.377 [2024-06-10 12:18:05.638050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.377 [2024-06-10 12:18:05.638066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.377 qpair failed and we were unable to recover it. 00:29:16.377 [2024-06-10 12:18:05.638236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.377 [2024-06-10 12:18:05.638252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.377 qpair failed and we were unable to recover it. 
00:29:16.377 [2024-06-10 12:18:05.638516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.377 [2024-06-10 12:18:05.638533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.377 qpair failed and we were unable to recover it.
00:29:16.377 [2024-06-10 12:18:05.638699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.377 [2024-06-10 12:18:05.638716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.377 qpair failed and we were unable to recover it.
00:29:16.377 [2024-06-10 12:18:05.638908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.377 [2024-06-10 12:18:05.638925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.377 qpair failed and we were unable to recover it.
00:29:16.377 [2024-06-10 12:18:05.639170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.377 [2024-06-10 12:18:05.639186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.377 qpair failed and we were unable to recover it.
00:29:16.377 [2024-06-10 12:18:05.639369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.377 [2024-06-10 12:18:05.639385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.377 qpair failed and we were unable to recover it.
00:29:16.377 [2024-06-10 12:18:05.639572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.377 [2024-06-10 12:18:05.639589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.377 qpair failed and we were unable to recover it.
00:29:16.377 [2024-06-10 12:18:05.639713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.377 [2024-06-10 12:18:05.639730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.377 qpair failed and we were unable to recover it.
00:29:16.377 [2024-06-10 12:18:05.639858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.377 [2024-06-10 12:18:05.639874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.377 qpair failed and we were unable to recover it.
00:29:16.377 [2024-06-10 12:18:05.640060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.377 [2024-06-10 12:18:05.640076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.377 qpair failed and we were unable to recover it.
00:29:16.377 [2024-06-10 12:18:05.640293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.377 [2024-06-10 12:18:05.640310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.377 qpair failed and we were unable to recover it.
00:29:16.377 [2024-06-10 12:18:05.640558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.377 [2024-06-10 12:18:05.640576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.377 qpair failed and we were unable to recover it.
00:29:16.377 [2024-06-10 12:18:05.640692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.377 [2024-06-10 12:18:05.640709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.377 qpair failed and we were unable to recover it.
00:29:16.377 [2024-06-10 12:18:05.640886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.377 [2024-06-10 12:18:05.640902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.377 qpair failed and we were unable to recover it.
00:29:16.377 [2024-06-10 12:18:05.641135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.377 [2024-06-10 12:18:05.641152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.377 qpair failed and we were unable to recover it.
00:29:16.377 [2024-06-10 12:18:05.641354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.377 [2024-06-10 12:18:05.641371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.377 qpair failed and we were unable to recover it.
00:29:16.377 [2024-06-10 12:18:05.641586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.377 [2024-06-10 12:18:05.641603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.377 qpair failed and we were unable to recover it.
00:29:16.377 [2024-06-10 12:18:05.641775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.377 [2024-06-10 12:18:05.641792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.377 qpair failed and we were unable to recover it.
00:29:16.377 [2024-06-10 12:18:05.641895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.377 [2024-06-10 12:18:05.641911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.377 qpair failed and we were unable to recover it.
00:29:16.377 [2024-06-10 12:18:05.642180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.377 [2024-06-10 12:18:05.642196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.377 qpair failed and we were unable to recover it.
00:29:16.377 [2024-06-10 12:18:05.642303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.377 [2024-06-10 12:18:05.642319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.377 qpair failed and we were unable to recover it.
00:29:16.377 [2024-06-10 12:18:05.642508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.377 [2024-06-10 12:18:05.642525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.377 qpair failed and we were unable to recover it.
00:29:16.377 [2024-06-10 12:18:05.642730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.377 [2024-06-10 12:18:05.642747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.377 qpair failed and we were unable to recover it.
00:29:16.377 [2024-06-10 12:18:05.642860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.377 [2024-06-10 12:18:05.642876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.377 qpair failed and we were unable to recover it.
00:29:16.377 [2024-06-10 12:18:05.643043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.377 [2024-06-10 12:18:05.643060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.377 qpair failed and we were unable to recover it.
00:29:16.377 [2024-06-10 12:18:05.643264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.377 [2024-06-10 12:18:05.643281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.377 qpair failed and we were unable to recover it.
00:29:16.377 [2024-06-10 12:18:05.643517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.377 [2024-06-10 12:18:05.643534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.377 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.643643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.643660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.643787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.643804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.643963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.643979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.644289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.644305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.644536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.644553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.644799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.644815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.644991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.645007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.645237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.645254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.645449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.645465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.645640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.645657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.645835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.645852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.646012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.646028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.646298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.646314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.646649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.646667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.646833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.646849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.647021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.647037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.647256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.647273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.647446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.647462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.647667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.647683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.647890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.647907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.648150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.648167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.648353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.648369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.648586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.648603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.648849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.648865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.648987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.649004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.649239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.649255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.649384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.649400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.649622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.649639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.649839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.649856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.650053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.650069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.650267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.650283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.650481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.650498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.650663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.650680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.650811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.650827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.651017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.651034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.651271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.651288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.651410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.651426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.651665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.651682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.378 [2024-06-10 12:18:05.651848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.378 [2024-06-10 12:18:05.651866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.378 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.652116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.652132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.652380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.652396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.652667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.652684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.652812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.652828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.653007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.653024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.653272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.653288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.653525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.653541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.653654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.653671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.653770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.653786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.653960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.653976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.654104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.654120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.654367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.654384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.654549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.654566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.654815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.654831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.654947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.654965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.655057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.655073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.655199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.655215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.655388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.655405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.655628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.655645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.655811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.655828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.655983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.655999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.656237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.656253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.656510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.656528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.656649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.656665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.656836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.656852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.657013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.657029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.657194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.657211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.657385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.657401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.657619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.657636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.657884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.657900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.658149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.658165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.658338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.658355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.658624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.658641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.658888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.658904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.659129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.659146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.659395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.659411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.659636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.659658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.659812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.659829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.660027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.379 [2024-06-10 12:18:05.660043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.379 qpair failed and we were unable to recover it.
00:29:16.379 [2024-06-10 12:18:05.660277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.380 [2024-06-10 12:18:05.660293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.380 qpair failed and we were unable to recover it.
00:29:16.380 [2024-06-10 12:18:05.660547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.380 [2024-06-10 12:18:05.660567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.380 qpair failed and we were unable to recover it.
00:29:16.380 [2024-06-10 12:18:05.660727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.380 [2024-06-10 12:18:05.660743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.380 qpair failed and we were unable to recover it.
00:29:16.380 [2024-06-10 12:18:05.660920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.380 [2024-06-10 12:18:05.660936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.380 qpair failed and we were unable to recover it.
00:29:16.380 [2024-06-10 12:18:05.661166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.380 [2024-06-10 12:18:05.661183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.380 qpair failed and we were unable to recover it.
00:29:16.380 [2024-06-10 12:18:05.661435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.380 [2024-06-10 12:18:05.661451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.380 qpair failed and we were unable to recover it.
00:29:16.380 [2024-06-10 12:18:05.661631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.380 [2024-06-10 12:18:05.661647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.380 qpair failed and we were unable to recover it.
00:29:16.380 [2024-06-10 12:18:05.661818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.380 [2024-06-10 12:18:05.661835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.380 qpair failed and we were unable to recover it.
00:29:16.380 [2024-06-10 12:18:05.662005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.380 [2024-06-10 12:18:05.662022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.380 qpair failed and we were unable to recover it.
00:29:16.380 [2024-06-10 12:18:05.662354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.380 [2024-06-10 12:18:05.662370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.380 qpair failed and we were unable to recover it.
00:29:16.380 [2024-06-10 12:18:05.662465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.380 [2024-06-10 12:18:05.662486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.380 qpair failed and we were unable to recover it.
00:29:16.380 [2024-06-10 12:18:05.662735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.380 [2024-06-10 12:18:05.662752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.380 qpair failed and we were unable to recover it. 00:29:16.380 [2024-06-10 12:18:05.662868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.380 [2024-06-10 12:18:05.662885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.380 qpair failed and we were unable to recover it. 00:29:16.380 [2024-06-10 12:18:05.663065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.380 [2024-06-10 12:18:05.663082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.380 qpair failed and we were unable to recover it. 00:29:16.380 [2024-06-10 12:18:05.663256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.380 [2024-06-10 12:18:05.663272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.380 qpair failed and we were unable to recover it. 00:29:16.380 [2024-06-10 12:18:05.663375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.380 [2024-06-10 12:18:05.663391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.380 qpair failed and we were unable to recover it. 
00:29:16.380 [2024-06-10 12:18:05.663497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.380 [2024-06-10 12:18:05.663514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.380 qpair failed and we were unable to recover it. 00:29:16.380 [2024-06-10 12:18:05.663735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.380 [2024-06-10 12:18:05.663752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.380 qpair failed and we were unable to recover it. 00:29:16.380 [2024-06-10 12:18:05.663874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.380 [2024-06-10 12:18:05.663890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.380 qpair failed and we were unable to recover it. 00:29:16.380 [2024-06-10 12:18:05.664179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.380 [2024-06-10 12:18:05.664196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.380 qpair failed and we were unable to recover it. 00:29:16.380 [2024-06-10 12:18:05.664363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.380 [2024-06-10 12:18:05.664379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.380 qpair failed and we were unable to recover it. 
00:29:16.380 [2024-06-10 12:18:05.664616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.380 [2024-06-10 12:18:05.664633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.380 qpair failed and we were unable to recover it. 00:29:16.380 [2024-06-10 12:18:05.664791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.380 [2024-06-10 12:18:05.664807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.380 qpair failed and we were unable to recover it. 00:29:16.380 [2024-06-10 12:18:05.664988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.380 [2024-06-10 12:18:05.665004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.380 qpair failed and we were unable to recover it. 00:29:16.380 [2024-06-10 12:18:05.665259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.380 [2024-06-10 12:18:05.665275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.380 qpair failed and we were unable to recover it. 00:29:16.380 [2024-06-10 12:18:05.665531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.380 [2024-06-10 12:18:05.665547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.380 qpair failed and we were unable to recover it. 
00:29:16.380 [2024-06-10 12:18:05.665647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.380 [2024-06-10 12:18:05.665663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.380 qpair failed and we were unable to recover it. 00:29:16.380 [2024-06-10 12:18:05.665898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.380 [2024-06-10 12:18:05.665914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.380 qpair failed and we were unable to recover it. 00:29:16.380 [2024-06-10 12:18:05.666030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.380 [2024-06-10 12:18:05.666051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.380 qpair failed and we were unable to recover it. 00:29:16.380 [2024-06-10 12:18:05.666251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.380 [2024-06-10 12:18:05.666267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.380 qpair failed and we were unable to recover it. 00:29:16.380 [2024-06-10 12:18:05.666531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.380 [2024-06-10 12:18:05.666548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.380 qpair failed and we were unable to recover it. 
00:29:16.380 [2024-06-10 12:18:05.666677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.380 [2024-06-10 12:18:05.666693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.380 qpair failed and we were unable to recover it. 00:29:16.380 [2024-06-10 12:18:05.666905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.380 [2024-06-10 12:18:05.666921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.380 qpair failed and we were unable to recover it. 00:29:16.380 [2024-06-10 12:18:05.667088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.380 [2024-06-10 12:18:05.667105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.380 qpair failed and we were unable to recover it. 00:29:16.380 [2024-06-10 12:18:05.667355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.380 [2024-06-10 12:18:05.667371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.380 qpair failed and we were unable to recover it. 00:29:16.380 [2024-06-10 12:18:05.667614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.380 [2024-06-10 12:18:05.667631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.380 qpair failed and we were unable to recover it. 
00:29:16.380 [2024-06-10 12:18:05.667780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.380 [2024-06-10 12:18:05.667796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.380 qpair failed and we were unable to recover it. 00:29:16.380 [2024-06-10 12:18:05.667963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.380 [2024-06-10 12:18:05.667980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.380 qpair failed and we were unable to recover it. 00:29:16.380 [2024-06-10 12:18:05.668160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.380 [2024-06-10 12:18:05.668177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.380 qpair failed and we were unable to recover it. 00:29:16.380 [2024-06-10 12:18:05.668345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.381 [2024-06-10 12:18:05.668362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.381 qpair failed and we were unable to recover it. 00:29:16.381 [2024-06-10 12:18:05.668524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.381 [2024-06-10 12:18:05.668541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.381 qpair failed and we were unable to recover it. 
00:29:16.381 [2024-06-10 12:18:05.668694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.381 [2024-06-10 12:18:05.668710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.381 qpair failed and we were unable to recover it. 00:29:16.381 [2024-06-10 12:18:05.668887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.381 [2024-06-10 12:18:05.668903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.381 qpair failed and we were unable to recover it. 00:29:16.381 [2024-06-10 12:18:05.669134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.381 [2024-06-10 12:18:05.669151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.381 qpair failed and we were unable to recover it. 00:29:16.381 [2024-06-10 12:18:05.669419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.381 [2024-06-10 12:18:05.669435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.381 qpair failed and we were unable to recover it. 00:29:16.381 [2024-06-10 12:18:05.669612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.381 [2024-06-10 12:18:05.669629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.381 qpair failed and we were unable to recover it. 
00:29:16.381 [2024-06-10 12:18:05.669855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.381 [2024-06-10 12:18:05.669872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.381 qpair failed and we were unable to recover it. 00:29:16.381 [2024-06-10 12:18:05.670074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.381 [2024-06-10 12:18:05.670090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.381 qpair failed and we were unable to recover it. 00:29:16.381 [2024-06-10 12:18:05.670201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.381 [2024-06-10 12:18:05.670217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.381 qpair failed and we were unable to recover it. 00:29:16.381 [2024-06-10 12:18:05.670326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.381 [2024-06-10 12:18:05.670342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.381 qpair failed and we were unable to recover it. 00:29:16.381 [2024-06-10 12:18:05.670535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.381 [2024-06-10 12:18:05.670551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.381 qpair failed and we were unable to recover it. 
00:29:16.381 [2024-06-10 12:18:05.670723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.381 [2024-06-10 12:18:05.670739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.381 qpair failed and we were unable to recover it. 00:29:16.381 [2024-06-10 12:18:05.670854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.381 [2024-06-10 12:18:05.670870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.381 qpair failed and we were unable to recover it. 00:29:16.381 [2024-06-10 12:18:05.671116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.381 [2024-06-10 12:18:05.671132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.381 qpair failed and we were unable to recover it. 00:29:16.381 [2024-06-10 12:18:05.671307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.381 [2024-06-10 12:18:05.671323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.381 qpair failed and we were unable to recover it. 00:29:16.381 [2024-06-10 12:18:05.671500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.381 [2024-06-10 12:18:05.671518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.381 qpair failed and we were unable to recover it. 
00:29:16.381 [2024-06-10 12:18:05.671627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.381 [2024-06-10 12:18:05.671643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.381 qpair failed and we were unable to recover it. 00:29:16.381 [2024-06-10 12:18:05.671817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.381 [2024-06-10 12:18:05.671833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.381 qpair failed and we were unable to recover it. 00:29:16.381 [2024-06-10 12:18:05.671940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.381 [2024-06-10 12:18:05.671957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.381 qpair failed and we were unable to recover it. 00:29:16.381 [2024-06-10 12:18:05.672130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.381 [2024-06-10 12:18:05.672146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.381 qpair failed and we were unable to recover it. 00:29:16.381 [2024-06-10 12:18:05.672308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.381 [2024-06-10 12:18:05.672324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.381 qpair failed and we were unable to recover it. 
00:29:16.381 [2024-06-10 12:18:05.672425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.381 [2024-06-10 12:18:05.672442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.381 qpair failed and we were unable to recover it. 00:29:16.381 [2024-06-10 12:18:05.672620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.381 [2024-06-10 12:18:05.672637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.381 qpair failed and we were unable to recover it. 00:29:16.381 [2024-06-10 12:18:05.672812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.381 [2024-06-10 12:18:05.672828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.381 qpair failed and we were unable to recover it. 00:29:16.381 [2024-06-10 12:18:05.673054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.381 [2024-06-10 12:18:05.673071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.381 qpair failed and we were unable to recover it. 00:29:16.381 [2024-06-10 12:18:05.673264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.381 [2024-06-10 12:18:05.673280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.381 qpair failed and we were unable to recover it. 
00:29:16.381 [2024-06-10 12:18:05.673442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.381 [2024-06-10 12:18:05.673458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.381 qpair failed and we were unable to recover it. 00:29:16.381 [2024-06-10 12:18:05.673585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.381 [2024-06-10 12:18:05.673602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.381 qpair failed and we were unable to recover it. 00:29:16.381 [2024-06-10 12:18:05.673772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.381 [2024-06-10 12:18:05.673788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.381 qpair failed and we were unable to recover it. 00:29:16.381 [2024-06-10 12:18:05.674011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.381 [2024-06-10 12:18:05.674029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.381 qpair failed and we were unable to recover it. 00:29:16.381 [2024-06-10 12:18:05.674130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.381 [2024-06-10 12:18:05.674146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.381 qpair failed and we were unable to recover it. 
00:29:16.381 [2024-06-10 12:18:05.674319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.381 [2024-06-10 12:18:05.674335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.381 qpair failed and we were unable to recover it. 00:29:16.381 [2024-06-10 12:18:05.674436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.382 [2024-06-10 12:18:05.674453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.382 qpair failed and we were unable to recover it. 00:29:16.382 [2024-06-10 12:18:05.674625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.382 [2024-06-10 12:18:05.674642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.382 qpair failed and we were unable to recover it. 00:29:16.382 [2024-06-10 12:18:05.674797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.382 [2024-06-10 12:18:05.674813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.382 qpair failed and we were unable to recover it. 00:29:16.382 [2024-06-10 12:18:05.675035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.382 [2024-06-10 12:18:05.675052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.382 qpair failed and we were unable to recover it. 
00:29:16.382 [2024-06-10 12:18:05.675149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.382 [2024-06-10 12:18:05.675166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.382 qpair failed and we were unable to recover it. 00:29:16.382 [2024-06-10 12:18:05.675283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.382 [2024-06-10 12:18:05.675299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.382 qpair failed and we were unable to recover it. 00:29:16.382 [2024-06-10 12:18:05.675487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.382 [2024-06-10 12:18:05.675504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.382 qpair failed and we were unable to recover it. 00:29:16.382 [2024-06-10 12:18:05.675674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.382 [2024-06-10 12:18:05.675691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.382 qpair failed and we were unable to recover it. 00:29:16.382 [2024-06-10 12:18:05.675851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.382 [2024-06-10 12:18:05.675867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.382 qpair failed and we were unable to recover it. 
00:29:16.382 [2024-06-10 12:18:05.676027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.382 [2024-06-10 12:18:05.676043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.382 qpair failed and we were unable to recover it. 00:29:16.382 [2024-06-10 12:18:05.676203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.382 [2024-06-10 12:18:05.676219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.382 qpair failed and we were unable to recover it. 00:29:16.382 [2024-06-10 12:18:05.676332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.382 [2024-06-10 12:18:05.676348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.382 qpair failed and we were unable to recover it. 00:29:16.382 [2024-06-10 12:18:05.676450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.382 [2024-06-10 12:18:05.676466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.382 qpair failed and we were unable to recover it. 00:29:16.382 [2024-06-10 12:18:05.676645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.382 [2024-06-10 12:18:05.676662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.382 qpair failed and we were unable to recover it. 
00:29:16.382 [2024-06-10 12:18:05.676817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.382 [2024-06-10 12:18:05.676833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.382 qpair failed and we were unable to recover it.
00:29:16.382 [2024-06-10 12:18:05.680853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.382 [2024-06-10 12:18:05.680876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.382 qpair failed and we were unable to recover it.
00:29:16.384 [2024-06-10 12:18:05.687562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.384 [2024-06-10 12:18:05.687584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.384 qpair failed and we were unable to recover it.
00:29:16.385 [2024-06-10 12:18:05.695473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.385 [2024-06-10 12:18:05.695495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.385 qpair failed and we were unable to recover it. 00:29:16.385 [2024-06-10 12:18:05.695595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.385 [2024-06-10 12:18:05.695612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.385 qpair failed and we were unable to recover it. 00:29:16.385 [2024-06-10 12:18:05.695790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.385 [2024-06-10 12:18:05.695808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.385 qpair failed and we were unable to recover it. 00:29:16.385 [2024-06-10 12:18:05.695964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.385 [2024-06-10 12:18:05.695980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.385 qpair failed and we were unable to recover it. 00:29:16.385 [2024-06-10 12:18:05.696085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.385 [2024-06-10 12:18:05.696102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.385 qpair failed and we were unable to recover it. 
00:29:16.385 [2024-06-10 12:18:05.696258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.385 [2024-06-10 12:18:05.696274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.385 qpair failed and we were unable to recover it. 00:29:16.385 [2024-06-10 12:18:05.696535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.385 [2024-06-10 12:18:05.696552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.385 qpair failed and we were unable to recover it. 00:29:16.385 [2024-06-10 12:18:05.696710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.385 [2024-06-10 12:18:05.696727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.385 qpair failed and we were unable to recover it. 00:29:16.385 [2024-06-10 12:18:05.696837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.385 [2024-06-10 12:18:05.696853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.385 qpair failed and we were unable to recover it. 00:29:16.385 [2024-06-10 12:18:05.696952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.385 [2024-06-10 12:18:05.696968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.385 qpair failed and we were unable to recover it. 
00:29:16.385 [2024-06-10 12:18:05.697076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.385 [2024-06-10 12:18:05.697092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.385 qpair failed and we were unable to recover it. 00:29:16.385 [2024-06-10 12:18:05.697182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.385 [2024-06-10 12:18:05.697198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.385 qpair failed and we were unable to recover it. 00:29:16.385 [2024-06-10 12:18:05.697287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.385 [2024-06-10 12:18:05.697303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.385 qpair failed and we were unable to recover it. 00:29:16.385 [2024-06-10 12:18:05.697532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.385 [2024-06-10 12:18:05.697549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.385 qpair failed and we were unable to recover it. 00:29:16.385 [2024-06-10 12:18:05.697710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.385 [2024-06-10 12:18:05.697726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.385 qpair failed and we were unable to recover it. 
00:29:16.385 [2024-06-10 12:18:05.697915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.385 [2024-06-10 12:18:05.697931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.385 qpair failed and we were unable to recover it. 00:29:16.385 [2024-06-10 12:18:05.698037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.385 [2024-06-10 12:18:05.698053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.385 qpair failed and we were unable to recover it. 00:29:16.385 [2024-06-10 12:18:05.698259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.385 [2024-06-10 12:18:05.698275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.385 qpair failed and we were unable to recover it. 00:29:16.385 [2024-06-10 12:18:05.698439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.385 [2024-06-10 12:18:05.698455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.385 qpair failed and we were unable to recover it. 00:29:16.385 [2024-06-10 12:18:05.698621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.385 [2024-06-10 12:18:05.698638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.385 qpair failed and we were unable to recover it. 
00:29:16.386 [2024-06-10 12:18:05.698804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.698820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 00:29:16.386 [2024-06-10 12:18:05.698975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.698991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 00:29:16.386 [2024-06-10 12:18:05.699085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.699101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 00:29:16.386 [2024-06-10 12:18:05.699210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.699226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 00:29:16.386 [2024-06-10 12:18:05.699315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.699331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 
00:29:16.386 [2024-06-10 12:18:05.699427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.699443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 00:29:16.386 [2024-06-10 12:18:05.699539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.699556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 00:29:16.386 [2024-06-10 12:18:05.699647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.699663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 00:29:16.386 [2024-06-10 12:18:05.699769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.699785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 00:29:16.386 [2024-06-10 12:18:05.699974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.699993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 
00:29:16.386 [2024-06-10 12:18:05.700099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.700115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 00:29:16.386 [2024-06-10 12:18:05.700216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.700232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 00:29:16.386 [2024-06-10 12:18:05.700321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.700337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 00:29:16.386 [2024-06-10 12:18:05.700431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.700448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 00:29:16.386 [2024-06-10 12:18:05.700545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.700562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 
00:29:16.386 [2024-06-10 12:18:05.700717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.700733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 00:29:16.386 [2024-06-10 12:18:05.700827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.700843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 00:29:16.386 [2024-06-10 12:18:05.701009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.701026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 00:29:16.386 [2024-06-10 12:18:05.701124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.701140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 00:29:16.386 [2024-06-10 12:18:05.701254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.701270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 
00:29:16.386 [2024-06-10 12:18:05.701517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.701533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 00:29:16.386 [2024-06-10 12:18:05.701756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.701772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 00:29:16.386 [2024-06-10 12:18:05.701944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.701961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 00:29:16.386 [2024-06-10 12:18:05.702052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.702069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 00:29:16.386 [2024-06-10 12:18:05.702173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.702189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 
00:29:16.386 [2024-06-10 12:18:05.702366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.702383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 00:29:16.386 [2024-06-10 12:18:05.702549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.702565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 00:29:16.386 [2024-06-10 12:18:05.702668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.702684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 00:29:16.386 [2024-06-10 12:18:05.702842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.702858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 00:29:16.386 [2024-06-10 12:18:05.703028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.703045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 
00:29:16.386 [2024-06-10 12:18:05.703152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.703169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 00:29:16.386 [2024-06-10 12:18:05.703348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.703364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 00:29:16.386 [2024-06-10 12:18:05.703454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.703471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 00:29:16.386 [2024-06-10 12:18:05.703652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.703668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 00:29:16.386 [2024-06-10 12:18:05.703890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.703906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 
00:29:16.386 [2024-06-10 12:18:05.704079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.704095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 00:29:16.386 [2024-06-10 12:18:05.704287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.704306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 00:29:16.386 [2024-06-10 12:18:05.704408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.704424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.386 qpair failed and we were unable to recover it. 00:29:16.386 [2024-06-10 12:18:05.704597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.386 [2024-06-10 12:18:05.704614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 00:29:16.387 [2024-06-10 12:18:05.704769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.704785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 
00:29:16.387 [2024-06-10 12:18:05.704877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.704893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 00:29:16.387 [2024-06-10 12:18:05.705009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.705026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 00:29:16.387 [2024-06-10 12:18:05.705195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.705212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 00:29:16.387 [2024-06-10 12:18:05.705311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.705327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 00:29:16.387 [2024-06-10 12:18:05.705470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.705491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 
00:29:16.387 [2024-06-10 12:18:05.705611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.705627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 00:29:16.387 [2024-06-10 12:18:05.705715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.705731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 00:29:16.387 [2024-06-10 12:18:05.705865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.705882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 00:29:16.387 [2024-06-10 12:18:05.706039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.706055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 00:29:16.387 [2024-06-10 12:18:05.706218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.706234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 
00:29:16.387 [2024-06-10 12:18:05.706345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.706362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 00:29:16.387 [2024-06-10 12:18:05.706472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.706491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 00:29:16.387 [2024-06-10 12:18:05.706597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.706613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 00:29:16.387 [2024-06-10 12:18:05.706775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.706792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 00:29:16.387 [2024-06-10 12:18:05.706903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.706919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 
00:29:16.387 [2024-06-10 12:18:05.707009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.707025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 00:29:16.387 [2024-06-10 12:18:05.707182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.707199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 00:29:16.387 [2024-06-10 12:18:05.707353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.707369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 00:29:16.387 [2024-06-10 12:18:05.707538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.707555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 00:29:16.387 [2024-06-10 12:18:05.707659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.707678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 
00:29:16.387 [2024-06-10 12:18:05.707772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.707788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 00:29:16.387 [2024-06-10 12:18:05.707941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.707957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 00:29:16.387 [2024-06-10 12:18:05.708182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.708198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 00:29:16.387 [2024-06-10 12:18:05.708299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.708315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 00:29:16.387 [2024-06-10 12:18:05.708413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.708430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 
00:29:16.387 [2024-06-10 12:18:05.708663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.708680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 00:29:16.387 [2024-06-10 12:18:05.708788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.708805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 00:29:16.387 [2024-06-10 12:18:05.709004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.709020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 00:29:16.387 [2024-06-10 12:18:05.709128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.709144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 00:29:16.387 [2024-06-10 12:18:05.709327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.709343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 
00:29:16.387 [2024-06-10 12:18:05.709566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.709582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 00:29:16.387 [2024-06-10 12:18:05.709760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.709776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 00:29:16.387 [2024-06-10 12:18:05.709939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.709955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 00:29:16.387 [2024-06-10 12:18:05.710111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.710127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 00:29:16.387 [2024-06-10 12:18:05.710299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.710316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 
00:29:16.387 [2024-06-10 12:18:05.710484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.710501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 00:29:16.387 [2024-06-10 12:18:05.710602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.710618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 00:29:16.387 [2024-06-10 12:18:05.710795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-06-10 12:18:05.710812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.387 qpair failed and we were unable to recover it. 00:29:16.388 [2024-06-10 12:18:05.710928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.710944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-06-10 12:18:05.711050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.711066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 
00:29:16.388 [2024-06-10 12:18:05.711318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.711335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-06-10 12:18:05.711438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.711454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-06-10 12:18:05.711540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.711556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-06-10 12:18:05.711648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.711665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-06-10 12:18:05.711825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.711841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 
00:29:16.388 [2024-06-10 12:18:05.711999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.712015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-06-10 12:18:05.712270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.712287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-06-10 12:18:05.712444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.712461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-06-10 12:18:05.712576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.712592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-06-10 12:18:05.712702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.712718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 
00:29:16.388 [2024-06-10 12:18:05.712888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.712905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-06-10 12:18:05.713086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.713102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-06-10 12:18:05.713326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.713342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-06-10 12:18:05.713486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.713503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-06-10 12:18:05.713597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.713613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 
00:29:16.388 [2024-06-10 12:18:05.713707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.713724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-06-10 12:18:05.713880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.713896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-06-10 12:18:05.714070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.714087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-06-10 12:18:05.714186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.714202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-06-10 12:18:05.714372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.714388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 
00:29:16.388 [2024-06-10 12:18:05.714507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.714524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-06-10 12:18:05.714644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.714660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-06-10 12:18:05.714824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.714841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-06-10 12:18:05.714933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.714949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-06-10 12:18:05.715109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.715128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 
00:29:16.388 [2024-06-10 12:18:05.715286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.715302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-06-10 12:18:05.715549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.715566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-06-10 12:18:05.715671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.715687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-06-10 12:18:05.715855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.715871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-06-10 12:18:05.715959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.715975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 
00:29:16.388 [2024-06-10 12:18:05.716136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.716152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-06-10 12:18:05.716316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.716333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-06-10 12:18:05.716505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.716522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-06-10 12:18:05.716620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.716637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-06-10 12:18:05.716725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.716740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 
00:29:16.388 [2024-06-10 12:18:05.716966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.716982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-06-10 12:18:05.717084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.388 [2024-06-10 12:18:05.717100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.388 qpair failed and we were unable to recover it. 00:29:16.388 [2024-06-10 12:18:05.717194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.717209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-06-10 12:18:05.717322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.717338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-06-10 12:18:05.717518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.717535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 
00:29:16.389 [2024-06-10 12:18:05.717626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.717642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-06-10 12:18:05.717812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.717828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-06-10 12:18:05.717938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.717955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-06-10 12:18:05.718110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.718126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-06-10 12:18:05.718303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.718319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 
00:29:16.389 [2024-06-10 12:18:05.718421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.718438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-06-10 12:18:05.718581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.718597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-06-10 12:18:05.718676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.718692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-06-10 12:18:05.718795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.718812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-06-10 12:18:05.718977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.718993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 
00:29:16.389 [2024-06-10 12:18:05.719086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.719102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-06-10 12:18:05.719205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.719222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-06-10 12:18:05.719314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.719331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-06-10 12:18:05.719416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.719432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-06-10 12:18:05.719592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.719609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 
00:29:16.389 [2024-06-10 12:18:05.719697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.719713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-06-10 12:18:05.719834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.719850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-06-10 12:18:05.719924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.719941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-06-10 12:18:05.720110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.720125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-06-10 12:18:05.720292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.720308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 
00:29:16.389 [2024-06-10 12:18:05.720481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.720498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-06-10 12:18:05.720659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.720675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-06-10 12:18:05.720839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.720855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-06-10 12:18:05.720958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.720974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-06-10 12:18:05.721192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.721208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 
00:29:16.389 [2024-06-10 12:18:05.721367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.721385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-06-10 12:18:05.721492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.721509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-06-10 12:18:05.721683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.721699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-06-10 12:18:05.721929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.721945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-06-10 12:18:05.722175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.722191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 
00:29:16.389 [2024-06-10 12:18:05.722414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.722430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-06-10 12:18:05.722589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.722605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-06-10 12:18:05.722704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.722720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-06-10 12:18:05.722894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.722910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 00:29:16.389 [2024-06-10 12:18:05.723135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.389 [2024-06-10 12:18:05.723151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.389 qpair failed and we were unable to recover it. 
00:29:16.390 [2024-06-10 12:18:05.727205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.390 [2024-06-10 12:18:05.727221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.390 qpair failed and we were unable to recover it.
00:29:16.390 [2024-06-10 12:18:05.727442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.390 [2024-06-10 12:18:05.727458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.390 qpair failed and we were unable to recover it.
00:29:16.390 [2024-06-10 12:18:05.727626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.390 [2024-06-10 12:18:05.727645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.390 qpair failed and we were unable to recover it.
00:29:16.390 [2024-06-10 12:18:05.727804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.390 [2024-06-10 12:18:05.727820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.390 qpair failed and we were unable to recover it.
00:29:16.390 [2024-06-10 12:18:05.727961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.390 [2024-06-10 12:18:05.727978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.390 qpair failed and we were unable to recover it.
00:29:16.390 [2024-06-10 12:18:05.728235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.390 [2024-06-10 12:18:05.728251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.390 qpair failed and we were unable to recover it. 00:29:16.390 [2024-06-10 12:18:05.728421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.390 [2024-06-10 12:18:05.728437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.390 qpair failed and we were unable to recover it. 00:29:16.390 [2024-06-10 12:18:05.728609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.390 [2024-06-10 12:18:05.728626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.390 qpair failed and we were unable to recover it. 00:29:16.390 [2024-06-10 12:18:05.728748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.390 [2024-06-10 12:18:05.728765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.390 qpair failed and we were unable to recover it. 00:29:16.390 [2024-06-10 12:18:05.728870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.390 [2024-06-10 12:18:05.728886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.390 qpair failed and we were unable to recover it. 
00:29:16.390 [2024-06-10 12:18:05.729114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.390 [2024-06-10 12:18:05.729130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.390 qpair failed and we were unable to recover it. 00:29:16.390 [2024-06-10 12:18:05.729380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.390 [2024-06-10 12:18:05.729396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.390 qpair failed and we were unable to recover it. 00:29:16.390 [2024-06-10 12:18:05.729649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.390 [2024-06-10 12:18:05.729665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.390 qpair failed and we were unable to recover it. 00:29:16.390 [2024-06-10 12:18:05.729832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.390 [2024-06-10 12:18:05.729848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.390 qpair failed and we were unable to recover it. 00:29:16.391 [2024-06-10 12:18:05.730072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.730088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 
00:29:16.391 [2024-06-10 12:18:05.730315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.730331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 00:29:16.391 [2024-06-10 12:18:05.730593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.730611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 00:29:16.391 [2024-06-10 12:18:05.730781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.730797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 00:29:16.391 [2024-06-10 12:18:05.730969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.730985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 00:29:16.391 [2024-06-10 12:18:05.731224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.731241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 
00:29:16.391 [2024-06-10 12:18:05.731416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.731432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 00:29:16.391 [2024-06-10 12:18:05.731604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.731621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 00:29:16.391 [2024-06-10 12:18:05.731797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.731814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 00:29:16.391 [2024-06-10 12:18:05.731981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.731997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 00:29:16.391 [2024-06-10 12:18:05.732195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.732214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 
00:29:16.391 [2024-06-10 12:18:05.732363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.732379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 00:29:16.391 [2024-06-10 12:18:05.732571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.732588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 00:29:16.391 [2024-06-10 12:18:05.732761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.732778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 00:29:16.391 [2024-06-10 12:18:05.732905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.732921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 00:29:16.391 [2024-06-10 12:18:05.733089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.733105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 
00:29:16.391 [2024-06-10 12:18:05.733350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.733366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 00:29:16.391 [2024-06-10 12:18:05.733619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.733635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 00:29:16.391 [2024-06-10 12:18:05.733743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.733759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 00:29:16.391 [2024-06-10 12:18:05.733875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.733891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 00:29:16.391 [2024-06-10 12:18:05.734069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.734085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 
00:29:16.391 [2024-06-10 12:18:05.734242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.734258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 00:29:16.391 [2024-06-10 12:18:05.734458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.734475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 00:29:16.391 [2024-06-10 12:18:05.734653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.734670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 00:29:16.391 [2024-06-10 12:18:05.734848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.734864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 00:29:16.391 [2024-06-10 12:18:05.735037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.735054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 
00:29:16.391 [2024-06-10 12:18:05.735303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.735320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 00:29:16.391 [2024-06-10 12:18:05.735527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.735543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 00:29:16.391 [2024-06-10 12:18:05.735803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.735819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 00:29:16.391 [2024-06-10 12:18:05.736001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.736017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 00:29:16.391 [2024-06-10 12:18:05.736128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.736144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 
00:29:16.391 [2024-06-10 12:18:05.736364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.736380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 00:29:16.391 [2024-06-10 12:18:05.736592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.736609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 00:29:16.391 [2024-06-10 12:18:05.736858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.736874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 00:29:16.391 [2024-06-10 12:18:05.737048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.737064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 00:29:16.391 [2024-06-10 12:18:05.737237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.737253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 
00:29:16.391 [2024-06-10 12:18:05.737431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.737448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 00:29:16.391 [2024-06-10 12:18:05.737721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.391 [2024-06-10 12:18:05.737738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.391 qpair failed and we were unable to recover it. 00:29:16.391 [2024-06-10 12:18:05.737976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.392 [2024-06-10 12:18:05.737993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.392 qpair failed and we were unable to recover it. 00:29:16.392 [2024-06-10 12:18:05.738149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.392 [2024-06-10 12:18:05.738166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.392 qpair failed and we were unable to recover it. 00:29:16.392 [2024-06-10 12:18:05.738422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.392 [2024-06-10 12:18:05.738439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.392 qpair failed and we were unable to recover it. 
00:29:16.392 [2024-06-10 12:18:05.738679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.392 [2024-06-10 12:18:05.738696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.392 qpair failed and we were unable to recover it. 00:29:16.392 [2024-06-10 12:18:05.738890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.392 [2024-06-10 12:18:05.738906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.392 qpair failed and we were unable to recover it. 00:29:16.392 [2024-06-10 12:18:05.739082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.392 [2024-06-10 12:18:05.739098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.392 qpair failed and we were unable to recover it. 00:29:16.392 [2024-06-10 12:18:05.739301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.392 [2024-06-10 12:18:05.739317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.392 qpair failed and we were unable to recover it. 00:29:16.392 [2024-06-10 12:18:05.739516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.392 [2024-06-10 12:18:05.739532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.392 qpair failed and we were unable to recover it. 
00:29:16.392 [2024-06-10 12:18:05.739743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.392 [2024-06-10 12:18:05.739760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.392 qpair failed and we were unable to recover it. 00:29:16.392 [2024-06-10 12:18:05.739986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.392 [2024-06-10 12:18:05.740002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.392 qpair failed and we were unable to recover it. 00:29:16.392 [2024-06-10 12:18:05.740253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.392 [2024-06-10 12:18:05.740270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.392 qpair failed and we were unable to recover it. 00:29:16.392 [2024-06-10 12:18:05.740548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.392 [2024-06-10 12:18:05.740565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.392 qpair failed and we were unable to recover it. 00:29:16.392 [2024-06-10 12:18:05.740826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.392 [2024-06-10 12:18:05.740844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.392 qpair failed and we were unable to recover it. 
00:29:16.392 [2024-06-10 12:18:05.741042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.392 [2024-06-10 12:18:05.741059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.392 qpair failed and we were unable to recover it. 00:29:16.392 [2024-06-10 12:18:05.741183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.392 [2024-06-10 12:18:05.741199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.392 qpair failed and we were unable to recover it. 00:29:16.392 [2024-06-10 12:18:05.741328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.392 [2024-06-10 12:18:05.741344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.392 qpair failed and we were unable to recover it. 00:29:16.392 [2024-06-10 12:18:05.741440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.392 [2024-06-10 12:18:05.741457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.392 qpair failed and we were unable to recover it. 00:29:16.392 [2024-06-10 12:18:05.741568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.392 [2024-06-10 12:18:05.741585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.392 qpair failed and we were unable to recover it. 
00:29:16.392 [2024-06-10 12:18:05.741766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.392 [2024-06-10 12:18:05.741782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.392 qpair failed and we were unable to recover it. 00:29:16.392 [2024-06-10 12:18:05.741932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.392 [2024-06-10 12:18:05.741948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.392 qpair failed and we were unable to recover it. 00:29:16.392 [2024-06-10 12:18:05.742104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.392 [2024-06-10 12:18:05.742121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.392 qpair failed and we were unable to recover it. 00:29:16.392 [2024-06-10 12:18:05.742235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.392 [2024-06-10 12:18:05.742251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.392 qpair failed and we were unable to recover it. 00:29:16.392 [2024-06-10 12:18:05.742357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.392 [2024-06-10 12:18:05.742373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.392 qpair failed and we were unable to recover it. 
00:29:16.392 [2024-06-10 12:18:05.742471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.392 [2024-06-10 12:18:05.742492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.392 qpair failed and we were unable to recover it. 00:29:16.392 [2024-06-10 12:18:05.742610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.392 [2024-06-10 12:18:05.742626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.392 qpair failed and we were unable to recover it. 00:29:16.392 [2024-06-10 12:18:05.742799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.392 [2024-06-10 12:18:05.742815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.392 qpair failed and we were unable to recover it. 00:29:16.392 [2024-06-10 12:18:05.742983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.392 [2024-06-10 12:18:05.742999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.392 qpair failed and we were unable to recover it. 00:29:16.392 [2024-06-10 12:18:05.743113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.392 [2024-06-10 12:18:05.743130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.392 qpair failed and we were unable to recover it. 
00:29:16.392 [2024-06-10 12:18:05.743307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.392 [2024-06-10 12:18:05.743324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.392 qpair failed and we were unable to recover it.
00:29:16.392 [2024-06-10 12:18:05.743422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.392 [2024-06-10 12:18:05.743438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.392 qpair failed and we were unable to recover it.
00:29:16.392 [2024-06-10 12:18:05.743608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.392 [2024-06-10 12:18:05.743625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.392 qpair failed and we were unable to recover it.
00:29:16.392 [2024-06-10 12:18:05.743790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.392 [2024-06-10 12:18:05.743807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.392 qpair failed and we were unable to recover it.
00:29:16.392 [2024-06-10 12:18:05.744018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.392 [2024-06-10 12:18:05.744035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.392 qpair failed and we were unable to recover it.
00:29:16.392 [2024-06-10 12:18:05.744206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.392 [2024-06-10 12:18:05.744223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.392 qpair failed and we were unable to recover it.
00:29:16.392 [2024-06-10 12:18:05.744297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.392 [2024-06-10 12:18:05.744313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.392 qpair failed and we were unable to recover it.
00:29:16.392 [2024-06-10 12:18:05.744420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.392 [2024-06-10 12:18:05.744436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.392 qpair failed and we were unable to recover it.
00:29:16.392 [2024-06-10 12:18:05.744659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.392 [2024-06-10 12:18:05.744676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.392 qpair failed and we were unable to recover it.
00:29:16.392 [2024-06-10 12:18:05.744839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.392 [2024-06-10 12:18:05.744855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.392 qpair failed and we were unable to recover it.
00:29:16.392 [2024-06-10 12:18:05.744957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.392 [2024-06-10 12:18:05.744973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.392 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.745083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.745099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.745264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.745280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.745389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.745405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.745632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.745649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.745815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.745831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.745968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.745984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.746239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.746255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.746363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.746380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.746481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.746497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.746664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.746680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.746783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.746799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.746921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.746937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.747094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.747110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.747280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.747299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.747397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.747413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.747540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.747556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.747701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.747717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.747806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.747822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.747921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.747937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.748035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.748051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.748217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.748233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.748334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.748350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.748516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.748533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.748612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.748628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.748876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.748893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.748983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.749000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.749088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.749104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.749225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.749242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.749430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.749446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.749621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.749638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.749754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.749770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.749884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.749900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.749999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.750015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.750128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.750144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.750250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.750266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.750363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.750379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.750504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.750521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.750678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.750695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.393 qpair failed and we were unable to recover it.
00:29:16.393 [2024-06-10 12:18:05.750846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.393 [2024-06-10 12:18:05.750862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.394 qpair failed and we were unable to recover it.
00:29:16.394 [2024-06-10 12:18:05.751053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.394 [2024-06-10 12:18:05.751070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.394 qpair failed and we were unable to recover it.
00:29:16.394 [2024-06-10 12:18:05.751227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.394 [2024-06-10 12:18:05.751243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.394 qpair failed and we were unable to recover it.
00:29:16.394 [2024-06-10 12:18:05.751332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.394 [2024-06-10 12:18:05.751348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.394 qpair failed and we were unable to recover it.
00:29:16.394 [2024-06-10 12:18:05.751423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.394 [2024-06-10 12:18:05.751441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.394 qpair failed and we were unable to recover it.
00:29:16.394 [2024-06-10 12:18:05.751620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.394 [2024-06-10 12:18:05.751637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.394 qpair failed and we were unable to recover it.
00:29:16.394 [2024-06-10 12:18:05.751749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.394 [2024-06-10 12:18:05.751765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.394 qpair failed and we were unable to recover it.
00:29:16.394 [2024-06-10 12:18:05.751946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.394 [2024-06-10 12:18:05.751963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.394 qpair failed and we were unable to recover it.
00:29:16.394 [2024-06-10 12:18:05.752049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.394 [2024-06-10 12:18:05.752065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.394 qpair failed and we were unable to recover it.
00:29:16.394 [2024-06-10 12:18:05.752137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.394 [2024-06-10 12:18:05.752155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.394 qpair failed and we were unable to recover it.
00:29:16.394 [2024-06-10 12:18:05.752330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.394 [2024-06-10 12:18:05.752347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.394 qpair failed and we were unable to recover it.
00:29:16.394 [2024-06-10 12:18:05.752529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.394 [2024-06-10 12:18:05.752546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.394 qpair failed and we were unable to recover it.
00:29:16.394 [2024-06-10 12:18:05.752727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.394 [2024-06-10 12:18:05.752743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.394 qpair failed and we were unable to recover it.
00:29:16.394 [2024-06-10 12:18:05.752849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.394 [2024-06-10 12:18:05.752866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.394 qpair failed and we were unable to recover it.
00:29:16.394 [2024-06-10 12:18:05.752982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.394 [2024-06-10 12:18:05.752998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.394 qpair failed and we were unable to recover it.
00:29:16.394 [2024-06-10 12:18:05.753114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.394 [2024-06-10 12:18:05.753133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.394 qpair failed and we were unable to recover it.
00:29:16.394 [2024-06-10 12:18:05.753289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.394 [2024-06-10 12:18:05.753306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.394 qpair failed and we were unable to recover it.
00:29:16.394 [2024-06-10 12:18:05.753411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.394 [2024-06-10 12:18:05.753427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.394 qpair failed and we were unable to recover it.
00:29:16.394 [2024-06-10 12:18:05.753535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.394 [2024-06-10 12:18:05.753552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.394 qpair failed and we were unable to recover it.
00:29:16.394 [2024-06-10 12:18:05.753736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.394 [2024-06-10 12:18:05.753753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.394 qpair failed and we were unable to recover it.
00:29:16.394 [2024-06-10 12:18:05.753926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.394 [2024-06-10 12:18:05.753942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.394 qpair failed and we were unable to recover it.
00:29:16.394 [2024-06-10 12:18:05.754133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.394 [2024-06-10 12:18:05.754149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.394 qpair failed and we were unable to recover it.
00:29:16.394 [2024-06-10 12:18:05.754345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.394 [2024-06-10 12:18:05.754362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.394 qpair failed and we were unable to recover it.
00:29:16.394 [2024-06-10 12:18:05.754470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.394 [2024-06-10 12:18:05.754491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.394 qpair failed and we were unable to recover it.
00:29:16.394 [2024-06-10 12:18:05.754675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.394 [2024-06-10 12:18:05.754691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.394 qpair failed and we were unable to recover it.
00:29:16.394 [2024-06-10 12:18:05.754789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.394 [2024-06-10 12:18:05.754805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.394 qpair failed and we were unable to recover it.
00:29:16.394 [2024-06-10 12:18:05.754926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.394 [2024-06-10 12:18:05.754942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.394 qpair failed and we were unable to recover it.
00:29:16.394 [2024-06-10 12:18:05.755051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.394 [2024-06-10 12:18:05.755068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.394 qpair failed and we were unable to recover it.
00:29:16.394 [2024-06-10 12:18:05.755169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.394 [2024-06-10 12:18:05.755185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.394 qpair failed and we were unable to recover it.
00:29:16.394 [2024-06-10 12:18:05.755432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.394 [2024-06-10 12:18:05.755448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.394 qpair failed and we were unable to recover it.
00:29:16.394 [2024-06-10 12:18:05.755562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.394 [2024-06-10 12:18:05.755577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.394 qpair failed and we were unable to recover it.
00:29:16.394 [2024-06-10 12:18:05.755683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.394 [2024-06-10 12:18:05.755699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.394 qpair failed and we were unable to recover it.
00:29:16.394 [2024-06-10 12:18:05.755872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.394 [2024-06-10 12:18:05.755889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.394 qpair failed and we were unable to recover it.
00:29:16.394 [2024-06-10 12:18:05.755986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.394 [2024-06-10 12:18:05.756003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.394 qpair failed and we were unable to recover it.
00:29:16.395 [2024-06-10 12:18:05.756144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.395 [2024-06-10 12:18:05.756161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.395 qpair failed and we were unable to recover it.
00:29:16.395 [2024-06-10 12:18:05.756274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.395 [2024-06-10 12:18:05.756289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.395 qpair failed and we were unable to recover it.
00:29:16.395 [2024-06-10 12:18:05.756445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.395 [2024-06-10 12:18:05.756461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.395 qpair failed and we were unable to recover it.
00:29:16.395 [2024-06-10 12:18:05.756564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.395 [2024-06-10 12:18:05.756580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.395 qpair failed and we were unable to recover it.
00:29:16.395 [2024-06-10 12:18:05.756678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.395 [2024-06-10 12:18:05.756695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.395 qpair failed and we were unable to recover it.
00:29:16.395 [2024-06-10 12:18:05.756942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.395 [2024-06-10 12:18:05.756958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.395 qpair failed and we were unable to recover it.
00:29:16.395 [2024-06-10 12:18:05.757133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.395 [2024-06-10 12:18:05.757149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.395 qpair failed and we were unable to recover it.
00:29:16.395 [2024-06-10 12:18:05.757248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.395 [2024-06-10 12:18:05.757264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.395 qpair failed and we were unable to recover it.
00:29:16.395 [2024-06-10 12:18:05.757487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.395 [2024-06-10 12:18:05.757504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.395 qpair failed and we were unable to recover it.
00:29:16.395 [2024-06-10 12:18:05.757664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.395 [2024-06-10 12:18:05.757680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.395 qpair failed and we were unable to recover it.
00:29:16.395 [2024-06-10 12:18:05.757837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.395 [2024-06-10 12:18:05.757852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.395 qpair failed and we were unable to recover it.
00:29:16.395 [2024-06-10 12:18:05.757922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.395 [2024-06-10 12:18:05.757944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.395 qpair failed and we were unable to recover it.
00:29:16.395 [2024-06-10 12:18:05.758107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.395 [2024-06-10 12:18:05.758123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.395 qpair failed and we were unable to recover it. 00:29:16.395 [2024-06-10 12:18:05.758275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.395 [2024-06-10 12:18:05.758292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.395 qpair failed and we were unable to recover it. 00:29:16.395 [2024-06-10 12:18:05.758429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.395 [2024-06-10 12:18:05.758445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.395 qpair failed and we were unable to recover it. 00:29:16.395 [2024-06-10 12:18:05.758640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.395 [2024-06-10 12:18:05.758656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.395 qpair failed and we were unable to recover it. 00:29:16.395 [2024-06-10 12:18:05.758818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.395 [2024-06-10 12:18:05.758834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.395 qpair failed and we were unable to recover it. 
00:29:16.395 [2024-06-10 12:18:05.758924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.395 [2024-06-10 12:18:05.758939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.395 qpair failed and we were unable to recover it. 00:29:16.395 [2024-06-10 12:18:05.759050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.395 [2024-06-10 12:18:05.759067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.395 qpair failed and we were unable to recover it. 00:29:16.395 [2024-06-10 12:18:05.759188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.395 [2024-06-10 12:18:05.759204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.395 qpair failed and we were unable to recover it. 00:29:16.395 [2024-06-10 12:18:05.759376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.395 [2024-06-10 12:18:05.759393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.395 qpair failed and we were unable to recover it. 00:29:16.395 [2024-06-10 12:18:05.759504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.395 [2024-06-10 12:18:05.759523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.395 qpair failed and we were unable to recover it. 
00:29:16.395 [2024-06-10 12:18:05.759624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.395 [2024-06-10 12:18:05.759639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.395 qpair failed and we were unable to recover it. 00:29:16.395 [2024-06-10 12:18:05.759756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.395 [2024-06-10 12:18:05.759773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.395 qpair failed and we were unable to recover it. 00:29:16.395 [2024-06-10 12:18:05.759942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.395 [2024-06-10 12:18:05.759958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.395 qpair failed and we were unable to recover it. 00:29:16.395 [2024-06-10 12:18:05.760051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.395 [2024-06-10 12:18:05.760067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.395 qpair failed and we were unable to recover it. 00:29:16.395 [2024-06-10 12:18:05.760161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.395 [2024-06-10 12:18:05.760177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.395 qpair failed and we were unable to recover it. 
00:29:16.395 [2024-06-10 12:18:05.760283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.395 [2024-06-10 12:18:05.760300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.395 qpair failed and we were unable to recover it. 00:29:16.395 [2024-06-10 12:18:05.760482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.395 [2024-06-10 12:18:05.760498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.395 qpair failed and we were unable to recover it. 00:29:16.395 [2024-06-10 12:18:05.760656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.395 [2024-06-10 12:18:05.760671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.395 qpair failed and we were unable to recover it. 00:29:16.395 [2024-06-10 12:18:05.760912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.395 [2024-06-10 12:18:05.760929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.395 qpair failed and we were unable to recover it. 00:29:16.395 [2024-06-10 12:18:05.761104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.395 [2024-06-10 12:18:05.761120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.395 qpair failed and we were unable to recover it. 
00:29:16.395 [2024-06-10 12:18:05.761223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.395 [2024-06-10 12:18:05.761239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.395 qpair failed and we were unable to recover it. 00:29:16.395 [2024-06-10 12:18:05.761419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.395 [2024-06-10 12:18:05.761436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.395 qpair failed and we were unable to recover it. 00:29:16.395 [2024-06-10 12:18:05.761605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.395 [2024-06-10 12:18:05.761622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.395 qpair failed and we were unable to recover it. 00:29:16.395 [2024-06-10 12:18:05.761728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.395 [2024-06-10 12:18:05.761744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.395 qpair failed and we were unable to recover it. 00:29:16.395 [2024-06-10 12:18:05.761836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.395 [2024-06-10 12:18:05.761852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.395 qpair failed and we were unable to recover it. 
00:29:16.395 [2024-06-10 12:18:05.762006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.395 [2024-06-10 12:18:05.762022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.395 qpair failed and we were unable to recover it. 00:29:16.395 [2024-06-10 12:18:05.762136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.762153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 00:29:16.396 [2024-06-10 12:18:05.762368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.762384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 00:29:16.396 [2024-06-10 12:18:05.762488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.762514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 00:29:16.396 [2024-06-10 12:18:05.762643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.762659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 
00:29:16.396 [2024-06-10 12:18:05.762769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.762785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 00:29:16.396 [2024-06-10 12:18:05.762941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.762957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 00:29:16.396 [2024-06-10 12:18:05.763076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.763092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 00:29:16.396 [2024-06-10 12:18:05.763268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.763285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 00:29:16.396 [2024-06-10 12:18:05.763534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.763555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 
00:29:16.396 [2024-06-10 12:18:05.763734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.763754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 00:29:16.396 [2024-06-10 12:18:05.763925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.763944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 00:29:16.396 [2024-06-10 12:18:05.764270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.764291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 00:29:16.396 [2024-06-10 12:18:05.764473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.764495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 00:29:16.396 [2024-06-10 12:18:05.764675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.764692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 
00:29:16.396 [2024-06-10 12:18:05.764838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.764853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 00:29:16.396 [2024-06-10 12:18:05.764974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.764990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 00:29:16.396 [2024-06-10 12:18:05.765236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.765253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 00:29:16.396 [2024-06-10 12:18:05.765506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.765522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 00:29:16.396 [2024-06-10 12:18:05.765773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.765789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 
00:29:16.396 [2024-06-10 12:18:05.765965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.765981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 00:29:16.396 [2024-06-10 12:18:05.766278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.766295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 00:29:16.396 [2024-06-10 12:18:05.766465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.766485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 00:29:16.396 [2024-06-10 12:18:05.766607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.766623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 00:29:16.396 [2024-06-10 12:18:05.766844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.766863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 
00:29:16.396 [2024-06-10 12:18:05.766981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.766997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 00:29:16.396 [2024-06-10 12:18:05.767183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.767199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 00:29:16.396 [2024-06-10 12:18:05.767317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.767332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 00:29:16.396 [2024-06-10 12:18:05.767497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.767515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 00:29:16.396 [2024-06-10 12:18:05.767711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.767727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 
00:29:16.396 [2024-06-10 12:18:05.767958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.767975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 00:29:16.396 [2024-06-10 12:18:05.768178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.768194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 00:29:16.396 [2024-06-10 12:18:05.768297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.768312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 00:29:16.396 [2024-06-10 12:18:05.768411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.768427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 00:29:16.396 [2024-06-10 12:18:05.768603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.768620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 
00:29:16.396 [2024-06-10 12:18:05.768797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.768814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 00:29:16.396 [2024-06-10 12:18:05.768971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.768987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 00:29:16.396 [2024-06-10 12:18:05.769273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.769289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 00:29:16.396 [2024-06-10 12:18:05.769456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.769472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 00:29:16.396 [2024-06-10 12:18:05.769641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.396 [2024-06-10 12:18:05.769657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.396 qpair failed and we were unable to recover it. 
00:29:16.397 [2024-06-10 12:18:05.769855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.397 [2024-06-10 12:18:05.769871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.397 qpair failed and we were unable to recover it. 00:29:16.397 [2024-06-10 12:18:05.770001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.397 [2024-06-10 12:18:05.770017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.397 qpair failed and we were unable to recover it. 00:29:16.397 [2024-06-10 12:18:05.770189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.397 [2024-06-10 12:18:05.770205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.397 qpair failed and we were unable to recover it. 00:29:16.397 [2024-06-10 12:18:05.770318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.397 [2024-06-10 12:18:05.770335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.397 qpair failed and we were unable to recover it. 00:29:16.397 [2024-06-10 12:18:05.770631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.397 [2024-06-10 12:18:05.770648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.397 qpair failed and we were unable to recover it. 
00:29:16.397 [2024-06-10 12:18:05.770808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.397 [2024-06-10 12:18:05.770825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.397 qpair failed and we were unable to recover it. 00:29:16.397 [2024-06-10 12:18:05.771009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.397 [2024-06-10 12:18:05.771025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.397 qpair failed and we were unable to recover it. 00:29:16.397 [2024-06-10 12:18:05.771268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.397 [2024-06-10 12:18:05.771284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.397 qpair failed and we were unable to recover it. 00:29:16.397 [2024-06-10 12:18:05.771452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.397 [2024-06-10 12:18:05.771468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.397 qpair failed and we were unable to recover it. 00:29:16.397 [2024-06-10 12:18:05.771674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.397 [2024-06-10 12:18:05.771691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.397 qpair failed and we were unable to recover it. 
00:29:16.397 [2024-06-10 12:18:05.771812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.397 [2024-06-10 12:18:05.771829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.397 qpair failed and we were unable to recover it. 00:29:16.397 [2024-06-10 12:18:05.772005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.397 [2024-06-10 12:18:05.772022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.397 qpair failed and we were unable to recover it. 00:29:16.397 [2024-06-10 12:18:05.772181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.397 [2024-06-10 12:18:05.772197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.397 qpair failed and we were unable to recover it. 00:29:16.397 [2024-06-10 12:18:05.772299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.397 [2024-06-10 12:18:05.772315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.397 qpair failed and we were unable to recover it. 00:29:16.397 [2024-06-10 12:18:05.772419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.397 [2024-06-10 12:18:05.772435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.397 qpair failed and we were unable to recover it. 
00:29:16.397 [2024-06-10 12:18:05.772712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.397 [2024-06-10 12:18:05.772729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.397 qpair failed and we were unable to recover it. 00:29:16.397 [2024-06-10 12:18:05.772833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.397 [2024-06-10 12:18:05.772849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.397 qpair failed and we were unable to recover it. 00:29:16.397 [2024-06-10 12:18:05.773071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.397 [2024-06-10 12:18:05.773087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.397 qpair failed and we were unable to recover it. 00:29:16.397 [2024-06-10 12:18:05.773269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.397 [2024-06-10 12:18:05.773286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.397 qpair failed and we were unable to recover it. 00:29:16.397 [2024-06-10 12:18:05.773529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.397 [2024-06-10 12:18:05.773546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.397 qpair failed and we were unable to recover it. 
00:29:16.397 [2024-06-10 12:18:05.773769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.397 [2024-06-10 12:18:05.773786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.397 qpair failed and we were unable to recover it.
[The same "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it" triplet repeats ~114 more times between 12:18:05.773940 and 12:18:05.793265, all against addr=10.0.0.2, port=4420; the reported tqpair changes from 0x7f503c000b90 to 0x7f504c000b90 at 12:18:05.778972 and to 0x88cfc0 at 12:18:05.792752.]
00:29:16.400 [2024-06-10 12:18:05.793376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.400 [2024-06-10 12:18:05.793392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.400 qpair failed and we were unable to recover it. 00:29:16.400 [2024-06-10 12:18:05.793498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.400 [2024-06-10 12:18:05.793515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.400 qpair failed and we were unable to recover it. 00:29:16.400 [2024-06-10 12:18:05.793611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.400 [2024-06-10 12:18:05.793628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.400 qpair failed and we were unable to recover it. 00:29:16.400 [2024-06-10 12:18:05.793902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.400 [2024-06-10 12:18:05.793918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.400 qpair failed and we were unable to recover it. 00:29:16.400 [2024-06-10 12:18:05.794025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.400 [2024-06-10 12:18:05.794041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.400 qpair failed and we were unable to recover it. 
00:29:16.400 [2024-06-10 12:18:05.794274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.400 [2024-06-10 12:18:05.794290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.400 qpair failed and we were unable to recover it. 00:29:16.400 [2024-06-10 12:18:05.794400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.400 [2024-06-10 12:18:05.794417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.400 qpair failed and we were unable to recover it. 00:29:16.400 [2024-06-10 12:18:05.794583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.400 [2024-06-10 12:18:05.794600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.400 qpair failed and we were unable to recover it. 00:29:16.400 [2024-06-10 12:18:05.794783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.400 [2024-06-10 12:18:05.794799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.400 qpair failed and we were unable to recover it. 00:29:16.400 [2024-06-10 12:18:05.794968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.400 [2024-06-10 12:18:05.794984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.400 qpair failed and we were unable to recover it. 
00:29:16.400 [2024-06-10 12:18:05.795096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.400 [2024-06-10 12:18:05.795112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.400 qpair failed and we were unable to recover it. 00:29:16.400 [2024-06-10 12:18:05.795228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.400 [2024-06-10 12:18:05.795244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.400 qpair failed and we were unable to recover it. 00:29:16.400 [2024-06-10 12:18:05.795465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.400 [2024-06-10 12:18:05.795488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.400 qpair failed and we were unable to recover it. 00:29:16.401 [2024-06-10 12:18:05.795680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.795697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 00:29:16.401 [2024-06-10 12:18:05.795787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.795803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 
00:29:16.401 [2024-06-10 12:18:05.795962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.795978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 00:29:16.401 [2024-06-10 12:18:05.796171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.796188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 00:29:16.401 [2024-06-10 12:18:05.796315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.796331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 00:29:16.401 [2024-06-10 12:18:05.796505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.796522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 00:29:16.401 [2024-06-10 12:18:05.796727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.796744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 
00:29:16.401 [2024-06-10 12:18:05.796852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.796868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 00:29:16.401 [2024-06-10 12:18:05.796972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.796989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 00:29:16.401 [2024-06-10 12:18:05.797165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.797181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 00:29:16.401 [2024-06-10 12:18:05.797368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.797387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 00:29:16.401 [2024-06-10 12:18:05.797495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.797512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 
00:29:16.401 [2024-06-10 12:18:05.797617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.797633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 00:29:16.401 [2024-06-10 12:18:05.797745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.797761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 00:29:16.401 [2024-06-10 12:18:05.797927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.797943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 00:29:16.401 [2024-06-10 12:18:05.798111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.798127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 00:29:16.401 [2024-06-10 12:18:05.798319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.798335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 
00:29:16.401 [2024-06-10 12:18:05.798437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.798453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 00:29:16.401 [2024-06-10 12:18:05.798736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.798753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 00:29:16.401 [2024-06-10 12:18:05.798936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.798952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 00:29:16.401 [2024-06-10 12:18:05.799059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.799076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 00:29:16.401 [2024-06-10 12:18:05.799227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.799243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 
00:29:16.401 [2024-06-10 12:18:05.799470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.799491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 00:29:16.401 [2024-06-10 12:18:05.799591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.799607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 00:29:16.401 [2024-06-10 12:18:05.799719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.799736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 00:29:16.401 [2024-06-10 12:18:05.799899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.799915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 00:29:16.401 [2024-06-10 12:18:05.800077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.800093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 
00:29:16.401 [2024-06-10 12:18:05.800251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.800267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 00:29:16.401 [2024-06-10 12:18:05.800386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.800402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 00:29:16.401 [2024-06-10 12:18:05.800578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.800595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 00:29:16.401 [2024-06-10 12:18:05.800768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.800784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 00:29:16.401 [2024-06-10 12:18:05.800952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.800968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 
00:29:16.401 [2024-06-10 12:18:05.801181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.801197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 00:29:16.401 [2024-06-10 12:18:05.801290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.801306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 00:29:16.401 [2024-06-10 12:18:05.801433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.801449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 00:29:16.401 [2024-06-10 12:18:05.801569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.801587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 00:29:16.401 [2024-06-10 12:18:05.801748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.801765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 
00:29:16.401 [2024-06-10 12:18:05.801876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.801892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 00:29:16.401 [2024-06-10 12:18:05.802053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.401 [2024-06-10 12:18:05.802069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.401 qpair failed and we were unable to recover it. 00:29:16.402 [2024-06-10 12:18:05.802159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.402 [2024-06-10 12:18:05.802176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.402 qpair failed and we were unable to recover it. 00:29:16.402 [2024-06-10 12:18:05.802402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.402 [2024-06-10 12:18:05.802418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.402 qpair failed and we were unable to recover it. 00:29:16.402 [2024-06-10 12:18:05.802513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.402 [2024-06-10 12:18:05.802530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.402 qpair failed and we were unable to recover it. 
00:29:16.402 [2024-06-10 12:18:05.802706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.402 [2024-06-10 12:18:05.802723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.402 qpair failed and we were unable to recover it. 00:29:16.402 [2024-06-10 12:18:05.802850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.402 [2024-06-10 12:18:05.802867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.402 qpair failed and we were unable to recover it. 00:29:16.402 [2024-06-10 12:18:05.802960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.402 [2024-06-10 12:18:05.802975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.402 qpair failed and we were unable to recover it. 00:29:16.402 [2024-06-10 12:18:05.803078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.402 [2024-06-10 12:18:05.803094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.402 qpair failed and we were unable to recover it. 00:29:16.402 [2024-06-10 12:18:05.803191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.402 [2024-06-10 12:18:05.803206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.402 qpair failed and we were unable to recover it. 
00:29:16.402 [2024-06-10 12:18:05.803303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.402 [2024-06-10 12:18:05.803319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.402 qpair failed and we were unable to recover it. 00:29:16.402 [2024-06-10 12:18:05.803567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.402 [2024-06-10 12:18:05.803584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.402 qpair failed and we were unable to recover it. 00:29:16.402 [2024-06-10 12:18:05.803806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.402 [2024-06-10 12:18:05.803822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.402 qpair failed and we were unable to recover it. 00:29:16.402 [2024-06-10 12:18:05.803926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.402 [2024-06-10 12:18:05.803942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.402 qpair failed and we were unable to recover it. 00:29:16.402 [2024-06-10 12:18:05.804111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.402 [2024-06-10 12:18:05.804127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.402 qpair failed and we were unable to recover it. 
00:29:16.402 [2024-06-10 12:18:05.804301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.402 [2024-06-10 12:18:05.804317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.402 qpair failed and we were unable to recover it. 00:29:16.402 [2024-06-10 12:18:05.804486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.402 [2024-06-10 12:18:05.804503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.402 qpair failed and we were unable to recover it. 00:29:16.402 [2024-06-10 12:18:05.804605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.402 [2024-06-10 12:18:05.804622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.402 qpair failed and we were unable to recover it. 00:29:16.402 [2024-06-10 12:18:05.804739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.402 [2024-06-10 12:18:05.804755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.402 qpair failed and we were unable to recover it. 00:29:16.402 [2024-06-10 12:18:05.804851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.402 [2024-06-10 12:18:05.804867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.402 qpair failed and we were unable to recover it. 
00:29:16.402 [2024-06-10 12:18:05.805035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.402 [2024-06-10 12:18:05.805052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.402 qpair failed and we were unable to recover it. 00:29:16.402 [2024-06-10 12:18:05.805223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.402 [2024-06-10 12:18:05.805239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.402 qpair failed and we were unable to recover it. 00:29:16.402 [2024-06-10 12:18:05.805338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.402 [2024-06-10 12:18:05.805354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.402 qpair failed and we were unable to recover it. 00:29:16.402 [2024-06-10 12:18:05.805470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.402 [2024-06-10 12:18:05.805491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.402 qpair failed and we were unable to recover it. 00:29:16.402 [2024-06-10 12:18:05.805579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.402 [2024-06-10 12:18:05.805595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.402 qpair failed and we were unable to recover it. 
00:29:16.402 [2024-06-10 12:18:05.805767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.402 [2024-06-10 12:18:05.805783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.402 qpair failed and we were unable to recover it. 00:29:16.402 [2024-06-10 12:18:05.805978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.402 [2024-06-10 12:18:05.805995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.402 qpair failed and we were unable to recover it. 00:29:16.402 [2024-06-10 12:18:05.806170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.402 [2024-06-10 12:18:05.806186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.402 qpair failed and we were unable to recover it. 00:29:16.402 [2024-06-10 12:18:05.806419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.402 [2024-06-10 12:18:05.806435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.402 qpair failed and we were unable to recover it. 00:29:16.402 [2024-06-10 12:18:05.806698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.402 [2024-06-10 12:18:05.806715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.402 qpair failed and we were unable to recover it. 
00:29:16.404 [2024-06-10 12:18:05.815331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.404 [2024-06-10 12:18:05.815348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.404 qpair failed and we were unable to recover it.
00:29:16.404 [2024-06-10 12:18:05.815470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.404 [2024-06-10 12:18:05.815491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.404 qpair failed and we were unable to recover it.
00:29:16.404 [2024-06-10 12:18:05.815668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.404 [2024-06-10 12:18:05.815691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.404 qpair failed and we were unable to recover it.
00:29:16.404 [2024-06-10 12:18:05.815793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.404 [2024-06-10 12:18:05.815810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.404 qpair failed and we were unable to recover it.
00:29:16.404 [2024-06-10 12:18:05.815911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.404 [2024-06-10 12:18:05.815927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.404 qpair failed and we were unable to recover it.
00:29:16.405 [2024-06-10 12:18:05.824534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.405 [2024-06-10 12:18:05.824550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.405 qpair failed and we were unable to recover it. 00:29:16.405 [2024-06-10 12:18:05.824796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.405 [2024-06-10 12:18:05.824812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.405 qpair failed and we were unable to recover it. 00:29:16.405 [2024-06-10 12:18:05.824923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.405 [2024-06-10 12:18:05.824937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.405 qpair failed and we were unable to recover it. 00:29:16.405 [2024-06-10 12:18:05.825055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.405 [2024-06-10 12:18:05.825070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.405 qpair failed and we were unable to recover it. 00:29:16.405 [2024-06-10 12:18:05.825246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.405 [2024-06-10 12:18:05.825262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.405 qpair failed and we were unable to recover it. 
00:29:16.405 [2024-06-10 12:18:05.825506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.405 [2024-06-10 12:18:05.825522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.405 qpair failed and we were unable to recover it. 00:29:16.405 [2024-06-10 12:18:05.825632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.405 [2024-06-10 12:18:05.825647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.405 qpair failed and we were unable to recover it. 00:29:16.405 [2024-06-10 12:18:05.825757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.405 [2024-06-10 12:18:05.825772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.405 qpair failed and we were unable to recover it. 00:29:16.405 [2024-06-10 12:18:05.825944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.405 [2024-06-10 12:18:05.825960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.405 qpair failed and we were unable to recover it. 00:29:16.405 [2024-06-10 12:18:05.826134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.405 [2024-06-10 12:18:05.826150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.405 qpair failed and we were unable to recover it. 
00:29:16.405 [2024-06-10 12:18:05.826380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.405 [2024-06-10 12:18:05.826396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.405 qpair failed and we were unable to recover it. 00:29:16.405 [2024-06-10 12:18:05.826661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.405 [2024-06-10 12:18:05.826678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.405 qpair failed and we were unable to recover it. 00:29:16.405 [2024-06-10 12:18:05.826841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.405 [2024-06-10 12:18:05.826857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.405 qpair failed and we were unable to recover it. 00:29:16.405 [2024-06-10 12:18:05.827149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.405 [2024-06-10 12:18:05.827166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.405 qpair failed and we were unable to recover it. 00:29:16.405 [2024-06-10 12:18:05.827387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.405 [2024-06-10 12:18:05.827403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.405 qpair failed and we were unable to recover it. 
00:29:16.405 [2024-06-10 12:18:05.827506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.405 [2024-06-10 12:18:05.827523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.405 qpair failed and we were unable to recover it. 00:29:16.405 [2024-06-10 12:18:05.827623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.405 [2024-06-10 12:18:05.827639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 00:29:16.406 [2024-06-10 12:18:05.827811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.827827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 00:29:16.406 [2024-06-10 12:18:05.827944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.827961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 00:29:16.406 [2024-06-10 12:18:05.828181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.828197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 
00:29:16.406 [2024-06-10 12:18:05.828326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.828342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 00:29:16.406 [2024-06-10 12:18:05.828519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.828535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 00:29:16.406 [2024-06-10 12:18:05.828696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.828715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 00:29:16.406 [2024-06-10 12:18:05.828825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.828841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 00:29:16.406 [2024-06-10 12:18:05.829016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.829033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 
00:29:16.406 [2024-06-10 12:18:05.829160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.829176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 00:29:16.406 [2024-06-10 12:18:05.829266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.829284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 00:29:16.406 [2024-06-10 12:18:05.829486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.829502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 00:29:16.406 [2024-06-10 12:18:05.829750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.829766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 00:29:16.406 [2024-06-10 12:18:05.829968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.829984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 
00:29:16.406 [2024-06-10 12:18:05.830179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.830197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 00:29:16.406 [2024-06-10 12:18:05.830396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.830413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 00:29:16.406 [2024-06-10 12:18:05.830698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.830715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 00:29:16.406 [2024-06-10 12:18:05.830947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.830964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 00:29:16.406 [2024-06-10 12:18:05.831239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.831256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 
00:29:16.406 [2024-06-10 12:18:05.831508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.831525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 00:29:16.406 [2024-06-10 12:18:05.831781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.831797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 00:29:16.406 [2024-06-10 12:18:05.832018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.832035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 00:29:16.406 [2024-06-10 12:18:05.832286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.832303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 00:29:16.406 [2024-06-10 12:18:05.832495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.832512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 
00:29:16.406 [2024-06-10 12:18:05.832681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.832698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 00:29:16.406 [2024-06-10 12:18:05.832870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.832886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 00:29:16.406 [2024-06-10 12:18:05.833087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.833104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 00:29:16.406 [2024-06-10 12:18:05.833350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.833367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 00:29:16.406 [2024-06-10 12:18:05.833501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.833517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 
00:29:16.406 [2024-06-10 12:18:05.833698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.833715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 00:29:16.406 [2024-06-10 12:18:05.833828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.833844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 00:29:16.406 [2024-06-10 12:18:05.833935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.833951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 00:29:16.406 [2024-06-10 12:18:05.834221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.834238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 00:29:16.406 [2024-06-10 12:18:05.834439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.834455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 
00:29:16.406 [2024-06-10 12:18:05.834625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.834642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 00:29:16.406 [2024-06-10 12:18:05.834757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.834774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 00:29:16.406 [2024-06-10 12:18:05.835002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.835019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 00:29:16.406 [2024-06-10 12:18:05.835239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.835256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 00:29:16.406 [2024-06-10 12:18:05.835371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.835387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 
00:29:16.406 [2024-06-10 12:18:05.835641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.835660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 00:29:16.406 [2024-06-10 12:18:05.835844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.835861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 00:29:16.406 [2024-06-10 12:18:05.835976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.406 [2024-06-10 12:18:05.835992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.406 qpair failed and we were unable to recover it. 00:29:16.406 [2024-06-10 12:18:05.836128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.407 [2024-06-10 12:18:05.836144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.407 qpair failed and we were unable to recover it. 00:29:16.407 [2024-06-10 12:18:05.836248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.407 [2024-06-10 12:18:05.836265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.407 qpair failed and we were unable to recover it. 
00:29:16.407 [2024-06-10 12:18:05.836465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.407 [2024-06-10 12:18:05.836485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.407 qpair failed and we were unable to recover it. 00:29:16.407 [2024-06-10 12:18:05.836658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.407 [2024-06-10 12:18:05.836674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.407 qpair failed and we were unable to recover it. 00:29:16.407 [2024-06-10 12:18:05.836843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.407 [2024-06-10 12:18:05.836861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.407 qpair failed and we were unable to recover it. 00:29:16.407 [2024-06-10 12:18:05.836999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.407 [2024-06-10 12:18:05.837016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.407 qpair failed and we were unable to recover it. 00:29:16.407 [2024-06-10 12:18:05.837305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.407 [2024-06-10 12:18:05.837322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.407 qpair failed and we were unable to recover it. 
00:29:16.407 [2024-06-10 12:18:05.837426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.407 [2024-06-10 12:18:05.837442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.407 qpair failed and we were unable to recover it. 00:29:16.407 [2024-06-10 12:18:05.837719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.407 [2024-06-10 12:18:05.837736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.407 qpair failed and we were unable to recover it. 00:29:16.407 [2024-06-10 12:18:05.837863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.407 [2024-06-10 12:18:05.837879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.407 qpair failed and we were unable to recover it. 00:29:16.407 [2024-06-10 12:18:05.838157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.407 [2024-06-10 12:18:05.838173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.407 qpair failed and we were unable to recover it. 00:29:16.407 [2024-06-10 12:18:05.838401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.407 [2024-06-10 12:18:05.838418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.407 qpair failed and we were unable to recover it. 
00:29:16.407 [2024-06-10 12:18:05.838653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.407 [2024-06-10 12:18:05.838670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.407 qpair failed and we were unable to recover it. 00:29:16.407 [2024-06-10 12:18:05.838893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.407 [2024-06-10 12:18:05.838909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.407 qpair failed and we were unable to recover it. 00:29:16.407 [2024-06-10 12:18:05.839113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.407 [2024-06-10 12:18:05.839129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.407 qpair failed and we were unable to recover it. 00:29:16.407 [2024-06-10 12:18:05.839419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.407 [2024-06-10 12:18:05.839435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.407 qpair failed and we were unable to recover it. 00:29:16.407 [2024-06-10 12:18:05.839698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.407 [2024-06-10 12:18:05.839715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.407 qpair failed and we were unable to recover it. 
00:29:16.407 [2024-06-10 12:18:05.839887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.407 [2024-06-10 12:18:05.839903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.407 qpair failed and we were unable to recover it.
00:29:16.694 [2024-06-10 12:18:05.858439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.694 [2024-06-10 12:18:05.858455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.694 qpair failed and we were unable to recover it. 00:29:16.694 [2024-06-10 12:18:05.858620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.694 [2024-06-10 12:18:05.858637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.694 qpair failed and we were unable to recover it. 00:29:16.694 [2024-06-10 12:18:05.858745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.694 [2024-06-10 12:18:05.858763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.694 qpair failed and we were unable to recover it. 00:29:16.694 [2024-06-10 12:18:05.858869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.694 [2024-06-10 12:18:05.858886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.694 qpair failed and we were unable to recover it. 00:29:16.694 [2024-06-10 12:18:05.859039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.694 [2024-06-10 12:18:05.859056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.694 qpair failed and we were unable to recover it. 
00:29:16.694 [2024-06-10 12:18:05.859223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.694 [2024-06-10 12:18:05.859239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.694 qpair failed and we were unable to recover it. 00:29:16.694 [2024-06-10 12:18:05.859343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.694 [2024-06-10 12:18:05.859359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.694 qpair failed and we were unable to recover it. 00:29:16.694 [2024-06-10 12:18:05.859510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.694 [2024-06-10 12:18:05.859527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.694 qpair failed and we were unable to recover it. 00:29:16.694 [2024-06-10 12:18:05.859638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.694 [2024-06-10 12:18:05.859655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.694 qpair failed and we were unable to recover it. 00:29:16.694 [2024-06-10 12:18:05.859759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.694 [2024-06-10 12:18:05.859775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.694 qpair failed and we were unable to recover it. 
00:29:16.694 [2024-06-10 12:18:05.859945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.694 [2024-06-10 12:18:05.859960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.694 qpair failed and we were unable to recover it. 00:29:16.694 [2024-06-10 12:18:05.860052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.694 [2024-06-10 12:18:05.860068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.694 qpair failed and we were unable to recover it. 00:29:16.694 [2024-06-10 12:18:05.860234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.694 [2024-06-10 12:18:05.860249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.694 qpair failed and we were unable to recover it. 00:29:16.694 [2024-06-10 12:18:05.860415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.694 [2024-06-10 12:18:05.860432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.694 qpair failed and we were unable to recover it. 00:29:16.694 [2024-06-10 12:18:05.860535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.694 [2024-06-10 12:18:05.860552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.694 qpair failed and we were unable to recover it. 
00:29:16.694 [2024-06-10 12:18:05.860707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.694 [2024-06-10 12:18:05.860722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 00:29:16.695 [2024-06-10 12:18:05.860825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.860841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 00:29:16.695 [2024-06-10 12:18:05.860936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.860951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 00:29:16.695 [2024-06-10 12:18:05.861058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.861075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 00:29:16.695 [2024-06-10 12:18:05.861177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.861193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 
00:29:16.695 [2024-06-10 12:18:05.861334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.861350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 00:29:16.695 [2024-06-10 12:18:05.861457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.861473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 00:29:16.695 [2024-06-10 12:18:05.861548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.861563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 00:29:16.695 [2024-06-10 12:18:05.861721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.861738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 00:29:16.695 [2024-06-10 12:18:05.861839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.861855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 
00:29:16.695 [2024-06-10 12:18:05.862026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.862043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 00:29:16.695 [2024-06-10 12:18:05.862147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.862163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 00:29:16.695 [2024-06-10 12:18:05.862326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.862342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 00:29:16.695 [2024-06-10 12:18:05.862451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.862467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 00:29:16.695 [2024-06-10 12:18:05.862564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.862580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 
00:29:16.695 [2024-06-10 12:18:05.862696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.862713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 00:29:16.695 [2024-06-10 12:18:05.862803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.862818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 00:29:16.695 [2024-06-10 12:18:05.862977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.862996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 00:29:16.695 [2024-06-10 12:18:05.863110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.863125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 00:29:16.695 [2024-06-10 12:18:05.863293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.863310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 
00:29:16.695 [2024-06-10 12:18:05.863468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.863488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 00:29:16.695 [2024-06-10 12:18:05.863585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.863601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 00:29:16.695 [2024-06-10 12:18:05.863685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.863700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 00:29:16.695 [2024-06-10 12:18:05.863807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.863824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 00:29:16.695 [2024-06-10 12:18:05.864052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.864069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 
00:29:16.695 [2024-06-10 12:18:05.864169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.864186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 00:29:16.695 [2024-06-10 12:18:05.864288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.864306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 00:29:16.695 [2024-06-10 12:18:05.864460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.864499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 00:29:16.695 [2024-06-10 12:18:05.864685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.864700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 00:29:16.695 [2024-06-10 12:18:05.864799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.864816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 
00:29:16.695 [2024-06-10 12:18:05.864907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.864922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 00:29:16.695 [2024-06-10 12:18:05.865153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.865169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 00:29:16.695 [2024-06-10 12:18:05.865343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.865359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 00:29:16.695 [2024-06-10 12:18:05.865508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.865524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 00:29:16.695 [2024-06-10 12:18:05.865619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.865634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 
00:29:16.695 [2024-06-10 12:18:05.865726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.865742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 00:29:16.695 [2024-06-10 12:18:05.865841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.695 [2024-06-10 12:18:05.865858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.695 qpair failed and we were unable to recover it. 00:29:16.695 [2024-06-10 12:18:05.865958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.696 [2024-06-10 12:18:05.865974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.696 qpair failed and we were unable to recover it. 00:29:16.696 [2024-06-10 12:18:05.866169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.696 [2024-06-10 12:18:05.866185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.696 qpair failed and we were unable to recover it. 00:29:16.696 [2024-06-10 12:18:05.866340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.696 [2024-06-10 12:18:05.866356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.696 qpair failed and we were unable to recover it. 
00:29:16.696 [2024-06-10 12:18:05.866516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.696 [2024-06-10 12:18:05.866533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.696 qpair failed and we were unable to recover it. 00:29:16.696 [2024-06-10 12:18:05.866650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.696 [2024-06-10 12:18:05.866666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.696 qpair failed and we were unable to recover it. 00:29:16.696 [2024-06-10 12:18:05.866832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.696 [2024-06-10 12:18:05.866849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.696 qpair failed and we were unable to recover it. 00:29:16.696 [2024-06-10 12:18:05.866948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.696 [2024-06-10 12:18:05.866965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.696 qpair failed and we were unable to recover it. 00:29:16.696 [2024-06-10 12:18:05.867083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.696 [2024-06-10 12:18:05.867099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.696 qpair failed and we were unable to recover it. 
00:29:16.696 [2024-06-10 12:18:05.867192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.696 [2024-06-10 12:18:05.867208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.696 qpair failed and we were unable to recover it. 00:29:16.696 [2024-06-10 12:18:05.867483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.696 [2024-06-10 12:18:05.867499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.696 qpair failed and we were unable to recover it. 00:29:16.696 [2024-06-10 12:18:05.867583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.696 [2024-06-10 12:18:05.867600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.696 qpair failed and we were unable to recover it. 00:29:16.696 [2024-06-10 12:18:05.867717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.696 [2024-06-10 12:18:05.867733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.696 qpair failed and we were unable to recover it. 00:29:16.696 [2024-06-10 12:18:05.867827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.696 [2024-06-10 12:18:05.867843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.696 qpair failed and we were unable to recover it. 
00:29:16.696 [2024-06-10 12:18:05.868014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.696 [2024-06-10 12:18:05.868031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.696 qpair failed and we were unable to recover it. 00:29:16.696 [2024-06-10 12:18:05.868202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.696 [2024-06-10 12:18:05.868218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.696 qpair failed and we were unable to recover it. 00:29:16.696 [2024-06-10 12:18:05.868314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.696 [2024-06-10 12:18:05.868330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.696 qpair failed and we were unable to recover it. 00:29:16.696 [2024-06-10 12:18:05.868509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.696 [2024-06-10 12:18:05.868526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.696 qpair failed and we were unable to recover it. 00:29:16.696 [2024-06-10 12:18:05.868700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.696 [2024-06-10 12:18:05.868717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.696 qpair failed and we were unable to recover it. 
00:29:16.696 [2024-06-10 12:18:05.868873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.696 [2024-06-10 12:18:05.868890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.696 qpair failed and we were unable to recover it. 00:29:16.696 [2024-06-10 12:18:05.869003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.696 [2024-06-10 12:18:05.869020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.696 qpair failed and we were unable to recover it. 00:29:16.696 [2024-06-10 12:18:05.869142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.696 [2024-06-10 12:18:05.869160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.696 qpair failed and we were unable to recover it. 00:29:16.696 [2024-06-10 12:18:05.869334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.696 [2024-06-10 12:18:05.869350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.696 qpair failed and we were unable to recover it. 00:29:16.696 [2024-06-10 12:18:05.869462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.696 [2024-06-10 12:18:05.869485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.696 qpair failed and we were unable to recover it. 
00:29:16.696 [2024-06-10 12:18:05.869604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.696 [2024-06-10 12:18:05.869621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.696 qpair failed and we were unable to recover it. 
[identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeated for tqpair=0x7f503c000b90, tqpair=0x7f504c000b90, and tqpair=0x88cfc0, all with addr=10.0.0.2, port=4420]
00:29:16.699 [2024-06-10 12:18:05.888400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-06-10 12:18:05.888417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 00:29:16.699 [2024-06-10 12:18:05.888585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-06-10 12:18:05.888602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 00:29:16.699 [2024-06-10 12:18:05.888780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-06-10 12:18:05.888796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 00:29:16.699 [2024-06-10 12:18:05.889017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.699 [2024-06-10 12:18:05.889033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.699 qpair failed and we were unable to recover it. 00:29:16.699 [2024-06-10 12:18:05.889206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.889225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 
00:29:16.700 [2024-06-10 12:18:05.889384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.889400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 00:29:16.700 [2024-06-10 12:18:05.889603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.889619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 00:29:16.700 [2024-06-10 12:18:05.889797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.889813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 00:29:16.700 [2024-06-10 12:18:05.889915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.889931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 00:29:16.700 [2024-06-10 12:18:05.890182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.890198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 
00:29:16.700 [2024-06-10 12:18:05.890379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.890395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 00:29:16.700 [2024-06-10 12:18:05.890500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.890516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 00:29:16.700 [2024-06-10 12:18:05.890710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.890727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 00:29:16.700 [2024-06-10 12:18:05.890851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.890868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 00:29:16.700 [2024-06-10 12:18:05.890986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.891002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 
00:29:16.700 [2024-06-10 12:18:05.891160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.891176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 00:29:16.700 [2024-06-10 12:18:05.891287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.891304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 00:29:16.700 [2024-06-10 12:18:05.891549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.891566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 00:29:16.700 [2024-06-10 12:18:05.891742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.891758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 00:29:16.700 [2024-06-10 12:18:05.891857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.891873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 
00:29:16.700 [2024-06-10 12:18:05.892040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.892056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 00:29:16.700 [2024-06-10 12:18:05.892285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.892302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 00:29:16.700 [2024-06-10 12:18:05.892456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.892473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 00:29:16.700 [2024-06-10 12:18:05.892596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.892612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 00:29:16.700 [2024-06-10 12:18:05.892722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.892738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 
00:29:16.700 [2024-06-10 12:18:05.892913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.892929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 00:29:16.700 [2024-06-10 12:18:05.893025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.893041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 00:29:16.700 [2024-06-10 12:18:05.893147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.893163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 00:29:16.700 [2024-06-10 12:18:05.893284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.893301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 00:29:16.700 [2024-06-10 12:18:05.893522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.893539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 
00:29:16.700 [2024-06-10 12:18:05.893709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.893724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 00:29:16.700 [2024-06-10 12:18:05.893879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.893896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 00:29:16.700 [2024-06-10 12:18:05.894065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.894081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 00:29:16.700 [2024-06-10 12:18:05.894177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.894193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 00:29:16.700 [2024-06-10 12:18:05.894309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.894326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 
00:29:16.700 [2024-06-10 12:18:05.894502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.894519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 00:29:16.700 [2024-06-10 12:18:05.894626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.894643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 00:29:16.700 [2024-06-10 12:18:05.894752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.894768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 00:29:16.700 [2024-06-10 12:18:05.895017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.895034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 00:29:16.700 [2024-06-10 12:18:05.895271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.895287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 
00:29:16.700 [2024-06-10 12:18:05.895411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.895428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 00:29:16.700 [2024-06-10 12:18:05.895600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.895616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.700 qpair failed and we were unable to recover it. 00:29:16.700 [2024-06-10 12:18:05.895797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.700 [2024-06-10 12:18:05.895814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.701 qpair failed and we were unable to recover it. 00:29:16.701 [2024-06-10 12:18:05.895916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.701 [2024-06-10 12:18:05.895932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.701 qpair failed and we were unable to recover it. 00:29:16.701 [2024-06-10 12:18:05.896080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.701 [2024-06-10 12:18:05.896108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.701 qpair failed and we were unable to recover it. 
00:29:16.701 [2024-06-10 12:18:05.896343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.701 [2024-06-10 12:18:05.896367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.701 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 from 12:18:05.896534 through 12:18:05.904440 — dozens of identical entries elided ...]
00:29:16.702 [2024-06-10 12:18:05.904637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.702 [2024-06-10 12:18:05.904653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.702 qpair failed and we were unable to recover it. 00:29:16.702 [2024-06-10 12:18:05.904744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.702 [2024-06-10 12:18:05.904761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.702 qpair failed and we were unable to recover it. 00:29:16.702 [2024-06-10 12:18:05.904920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.702 [2024-06-10 12:18:05.904936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.702 qpair failed and we were unable to recover it. 00:29:16.702 [2024-06-10 12:18:05.905135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.702 [2024-06-10 12:18:05.905154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.702 qpair failed and we were unable to recover it. 00:29:16.702 [2024-06-10 12:18:05.905243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.702 [2024-06-10 12:18:05.905259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.702 qpair failed and we were unable to recover it. 
00:29:16.702 [2024-06-10 12:18:05.905354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.702 [2024-06-10 12:18:05.905371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.702 qpair failed and we were unable to recover it. 00:29:16.702 [2024-06-10 12:18:05.905487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.702 [2024-06-10 12:18:05.905504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.702 qpair failed and we were unable to recover it. 00:29:16.702 [2024-06-10 12:18:05.905759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.702 [2024-06-10 12:18:05.905780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.702 qpair failed and we were unable to recover it. 00:29:16.702 [2024-06-10 12:18:05.905902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.702 [2024-06-10 12:18:05.905919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.702 qpair failed and we were unable to recover it. 00:29:16.702 [2024-06-10 12:18:05.906087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.702 [2024-06-10 12:18:05.906103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.702 qpair failed and we were unable to recover it. 
00:29:16.702 [2024-06-10 12:18:05.906351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.702 [2024-06-10 12:18:05.906368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.702 qpair failed and we were unable to recover it. 00:29:16.702 [2024-06-10 12:18:05.906554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.702 [2024-06-10 12:18:05.906571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.702 qpair failed and we were unable to recover it. 00:29:16.702 [2024-06-10 12:18:05.906724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.702 [2024-06-10 12:18:05.906741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.702 qpair failed and we were unable to recover it. 00:29:16.702 [2024-06-10 12:18:05.906832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.702 [2024-06-10 12:18:05.906848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.702 qpair failed and we were unable to recover it. 00:29:16.702 [2024-06-10 12:18:05.906973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.702 [2024-06-10 12:18:05.906989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.702 qpair failed and we were unable to recover it. 
00:29:16.702 [2024-06-10 12:18:05.907084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.702 [2024-06-10 12:18:05.907100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.702 qpair failed and we were unable to recover it. 00:29:16.702 [2024-06-10 12:18:05.907205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.702 [2024-06-10 12:18:05.907221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.702 qpair failed and we were unable to recover it. 00:29:16.702 [2024-06-10 12:18:05.907338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.702 [2024-06-10 12:18:05.907355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.702 qpair failed and we were unable to recover it. 00:29:16.702 [2024-06-10 12:18:05.907466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.702 [2024-06-10 12:18:05.907487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.702 qpair failed and we were unable to recover it. 00:29:16.702 [2024-06-10 12:18:05.907578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.702 [2024-06-10 12:18:05.907594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.702 qpair failed and we were unable to recover it. 
00:29:16.702 [2024-06-10 12:18:05.907790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.702 [2024-06-10 12:18:05.907806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.702 qpair failed and we were unable to recover it. 00:29:16.702 [2024-06-10 12:18:05.907979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.702 [2024-06-10 12:18:05.907995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.702 qpair failed and we were unable to recover it. 00:29:16.702 [2024-06-10 12:18:05.908098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.702 [2024-06-10 12:18:05.908114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.702 qpair failed and we were unable to recover it. 00:29:16.702 [2024-06-10 12:18:05.908200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.702 [2024-06-10 12:18:05.908216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-06-10 12:18:05.908323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-06-10 12:18:05.908341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 
00:29:16.703 [2024-06-10 12:18:05.908444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-06-10 12:18:05.908460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-06-10 12:18:05.908575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-06-10 12:18:05.908592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-06-10 12:18:05.908702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-06-10 12:18:05.908718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-06-10 12:18:05.908820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-06-10 12:18:05.908837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-06-10 12:18:05.909008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-06-10 12:18:05.909024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 
00:29:16.703 [2024-06-10 12:18:05.909115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-06-10 12:18:05.909133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-06-10 12:18:05.909290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-06-10 12:18:05.909307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-06-10 12:18:05.909511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-06-10 12:18:05.909528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-06-10 12:18:05.909643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-06-10 12:18:05.909661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-06-10 12:18:05.909749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-06-10 12:18:05.909766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 
00:29:16.703 [2024-06-10 12:18:05.909925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-06-10 12:18:05.909941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-06-10 12:18:05.910102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-06-10 12:18:05.910118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-06-10 12:18:05.910200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-06-10 12:18:05.910216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-06-10 12:18:05.910373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-06-10 12:18:05.910390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-06-10 12:18:05.910494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-06-10 12:18:05.910512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 
00:29:16.703 [2024-06-10 12:18:05.910614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-06-10 12:18:05.910630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-06-10 12:18:05.910809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-06-10 12:18:05.910826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-06-10 12:18:05.910934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-06-10 12:18:05.910951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-06-10 12:18:05.911116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-06-10 12:18:05.911132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-06-10 12:18:05.911212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-06-10 12:18:05.911229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 
00:29:16.703 [2024-06-10 12:18:05.911336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-06-10 12:18:05.911352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-06-10 12:18:05.911443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-06-10 12:18:05.911460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-06-10 12:18:05.911712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-06-10 12:18:05.911729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-06-10 12:18:05.911897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-06-10 12:18:05.911913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-06-10 12:18:05.912014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-06-10 12:18:05.912030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 
00:29:16.703 [2024-06-10 12:18:05.912150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-06-10 12:18:05.912166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-06-10 12:18:05.912322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-06-10 12:18:05.912339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-06-10 12:18:05.912510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-06-10 12:18:05.912526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-06-10 12:18:05.912620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-06-10 12:18:05.912636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 00:29:16.703 [2024-06-10 12:18:05.912711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.703 [2024-06-10 12:18:05.912727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.703 qpair failed and we were unable to recover it. 
00:29:16.704 [2024-06-10 12:18:05.912964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-06-10 12:18:05.912981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-06-10 12:18:05.913079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-06-10 12:18:05.913095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-06-10 12:18:05.913185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-06-10 12:18:05.913201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-06-10 12:18:05.913375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-06-10 12:18:05.913391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-06-10 12:18:05.913470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-06-10 12:18:05.913491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 
00:29:16.704 [2024-06-10 12:18:05.913675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-06-10 12:18:05.913692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-06-10 12:18:05.913858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-06-10 12:18:05.913874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-06-10 12:18:05.913989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-06-10 12:18:05.914005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-06-10 12:18:05.914099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-06-10 12:18:05.914115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-06-10 12:18:05.914228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-06-10 12:18:05.914244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 
00:29:16.704 [2024-06-10 12:18:05.914334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-06-10 12:18:05.914350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-06-10 12:18:05.914439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-06-10 12:18:05.914455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-06-10 12:18:05.914637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-06-10 12:18:05.914655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-06-10 12:18:05.914817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-06-10 12:18:05.914833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-06-10 12:18:05.914986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-06-10 12:18:05.915003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 
00:29:16.704 [2024-06-10 12:18:05.915094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-06-10 12:18:05.915112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-06-10 12:18:05.915235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-06-10 12:18:05.915251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-06-10 12:18:05.915413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-06-10 12:18:05.915430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-06-10 12:18:05.915615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-06-10 12:18:05.915632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-06-10 12:18:05.915814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-06-10 12:18:05.915830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 
00:29:16.704 [2024-06-10 12:18:05.915988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-06-10 12:18:05.916004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-06-10 12:18:05.916167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-06-10 12:18:05.916183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-06-10 12:18:05.916292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-06-10 12:18:05.916308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-06-10 12:18:05.916398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-06-10 12:18:05.916414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 00:29:16.704 [2024-06-10 12:18:05.916604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.704 [2024-06-10 12:18:05.916621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.704 qpair failed and we were unable to recover it. 
00:29:16.707 [2024-06-10 12:18:05.934229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.707 [2024-06-10 12:18:05.934251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.707 qpair failed and we were unable to recover it. 00:29:16.707 [2024-06-10 12:18:05.934529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.707 [2024-06-10 12:18:05.934547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.707 qpair failed and we were unable to recover it. 00:29:16.707 [2024-06-10 12:18:05.934707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.707 [2024-06-10 12:18:05.934724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.707 qpair failed and we were unable to recover it. 00:29:16.707 [2024-06-10 12:18:05.934917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.707 [2024-06-10 12:18:05.934933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.707 qpair failed and we were unable to recover it. 00:29:16.707 [2024-06-10 12:18:05.935130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.707 [2024-06-10 12:18:05.935146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.707 qpair failed and we were unable to recover it. 
00:29:16.707 [2024-06-10 12:18:05.935330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.707 [2024-06-10 12:18:05.935346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.707 qpair failed and we were unable to recover it. 00:29:16.707 [2024-06-10 12:18:05.935537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.707 [2024-06-10 12:18:05.935554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.707 qpair failed and we were unable to recover it. 00:29:16.707 [2024-06-10 12:18:05.935675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.707 [2024-06-10 12:18:05.935692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.707 qpair failed and we were unable to recover it. 00:29:16.707 [2024-06-10 12:18:05.935925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.707 [2024-06-10 12:18:05.935942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.707 qpair failed and we were unable to recover it. 00:29:16.707 [2024-06-10 12:18:05.936018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.707 [2024-06-10 12:18:05.936033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.707 qpair failed and we were unable to recover it. 
00:29:16.707 [2024-06-10 12:18:05.936204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.707 [2024-06-10 12:18:05.936220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.707 qpair failed and we were unable to recover it. 00:29:16.707 [2024-06-10 12:18:05.936347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.707 [2024-06-10 12:18:05.936363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.707 qpair failed and we were unable to recover it. 00:29:16.707 [2024-06-10 12:18:05.936467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.707 [2024-06-10 12:18:05.936498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.707 qpair failed and we were unable to recover it. 00:29:16.707 [2024-06-10 12:18:05.936672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.707 [2024-06-10 12:18:05.936688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.707 qpair failed and we were unable to recover it. 00:29:16.707 [2024-06-10 12:18:05.936790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.707 [2024-06-10 12:18:05.936807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.707 qpair failed and we were unable to recover it. 
00:29:16.707 [2024-06-10 12:18:05.936969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-06-10 12:18:05.936985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-06-10 12:18:05.937090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-06-10 12:18:05.937113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-06-10 12:18:05.937233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-06-10 12:18:05.937250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-06-10 12:18:05.937410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-06-10 12:18:05.937427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-06-10 12:18:05.937533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-06-10 12:18:05.937551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 
00:29:16.708 [2024-06-10 12:18:05.937764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-06-10 12:18:05.937781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-06-10 12:18:05.937906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-06-10 12:18:05.937922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-06-10 12:18:05.938009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-06-10 12:18:05.938026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-06-10 12:18:05.938194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-06-10 12:18:05.938210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-06-10 12:18:05.938327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-06-10 12:18:05.938342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 
00:29:16.708 [2024-06-10 12:18:05.938440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-06-10 12:18:05.938456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-06-10 12:18:05.938583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-06-10 12:18:05.938599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-06-10 12:18:05.938691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-06-10 12:18:05.938707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-06-10 12:18:05.938875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-06-10 12:18:05.938892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-06-10 12:18:05.938980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-06-10 12:18:05.938996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 
00:29:16.708 [2024-06-10 12:18:05.939164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-06-10 12:18:05.939180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-06-10 12:18:05.939287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-06-10 12:18:05.939304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-06-10 12:18:05.939411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-06-10 12:18:05.939427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-06-10 12:18:05.939606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-06-10 12:18:05.939623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-06-10 12:18:05.939717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-06-10 12:18:05.939734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 
00:29:16.708 [2024-06-10 12:18:05.939895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-06-10 12:18:05.939912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-06-10 12:18:05.940025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-06-10 12:18:05.940041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-06-10 12:18:05.940130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-06-10 12:18:05.940146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-06-10 12:18:05.940330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-06-10 12:18:05.940347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-06-10 12:18:05.940459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-06-10 12:18:05.940475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 
00:29:16.708 [2024-06-10 12:18:05.940595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-06-10 12:18:05.940614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-06-10 12:18:05.940706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-06-10 12:18:05.940722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-06-10 12:18:05.940833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-06-10 12:18:05.940850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-06-10 12:18:05.941008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-06-10 12:18:05.941025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-06-10 12:18:05.941139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-06-10 12:18:05.941156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 
00:29:16.708 [2024-06-10 12:18:05.941256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.708 [2024-06-10 12:18:05.941272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.708 qpair failed and we were unable to recover it. 00:29:16.708 [2024-06-10 12:18:05.941379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-06-10 12:18:05.941396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-06-10 12:18:05.941640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-06-10 12:18:05.941657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-06-10 12:18:05.941760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-06-10 12:18:05.941776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-06-10 12:18:05.941963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-06-10 12:18:05.941981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 
00:29:16.709 [2024-06-10 12:18:05.942154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-06-10 12:18:05.942171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-06-10 12:18:05.942259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-06-10 12:18:05.942275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-06-10 12:18:05.942467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-06-10 12:18:05.942488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-06-10 12:18:05.942591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-06-10 12:18:05.942607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-06-10 12:18:05.942795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-06-10 12:18:05.942812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 
00:29:16.709 [2024-06-10 12:18:05.942904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-06-10 12:18:05.942921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-06-10 12:18:05.943082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-06-10 12:18:05.943098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-06-10 12:18:05.943167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-06-10 12:18:05.943184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-06-10 12:18:05.943293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-06-10 12:18:05.943309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-06-10 12:18:05.943403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-06-10 12:18:05.943421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 
00:29:16.709 [2024-06-10 12:18:05.943513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-06-10 12:18:05.943529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-06-10 12:18:05.943701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-06-10 12:18:05.943719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-06-10 12:18:05.943813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-06-10 12:18:05.943830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-06-10 12:18:05.944056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-06-10 12:18:05.944073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-06-10 12:18:05.944182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-06-10 12:18:05.944199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 
00:29:16.709 [2024-06-10 12:18:05.944316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-06-10 12:18:05.944332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-06-10 12:18:05.944498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-06-10 12:18:05.944516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-06-10 12:18:05.944625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-06-10 12:18:05.944641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-06-10 12:18:05.944833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-06-10 12:18:05.944850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-06-10 12:18:05.945039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-06-10 12:18:05.945056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 
00:29:16.709 [2024-06-10 12:18:05.945147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-06-10 12:18:05.945163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-06-10 12:18:05.945415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-06-10 12:18:05.945432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-06-10 12:18:05.945585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-06-10 12:18:05.945601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-06-10 12:18:05.945760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-06-10 12:18:05.945777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 00:29:16.709 [2024-06-10 12:18:05.945882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.709 [2024-06-10 12:18:05.945899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.709 qpair failed and we were unable to recover it. 
00:29:16.709 [2024-06-10 12:18:05.945989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.709 [2024-06-10 12:18:05.946017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.709 qpair failed and we were unable to recover it.
00:29:16.709 [2024-06-10 12:18:05.946284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.709 [2024-06-10 12:18:05.946301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.709 qpair failed and we were unable to recover it.
00:29:16.709 [2024-06-10 12:18:05.946403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.709 [2024-06-10 12:18:05.946418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.709 qpair failed and we were unable to recover it.
00:29:16.709 [2024-06-10 12:18:05.946594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.709 [2024-06-10 12:18:05.946612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.709 qpair failed and we were unable to recover it.
00:29:16.709 [2024-06-10 12:18:05.946737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.709 [2024-06-10 12:18:05.946755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.709 qpair failed and we were unable to recover it.
00:29:16.709 [2024-06-10 12:18:05.946941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.709 [2024-06-10 12:18:05.946960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.709 qpair failed and we were unable to recover it.
00:29:16.709 [2024-06-10 12:18:05.947067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.709 [2024-06-10 12:18:05.947084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.709 qpair failed and we were unable to recover it.
00:29:16.709 [2024-06-10 12:18:05.947190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.709 [2024-06-10 12:18:05.947206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.709 qpair failed and we were unable to recover it.
00:29:16.709 [2024-06-10 12:18:05.947394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.709 [2024-06-10 12:18:05.947411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.709 qpair failed and we were unable to recover it.
00:29:16.709 [2024-06-10 12:18:05.947580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.709 [2024-06-10 12:18:05.947598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.709 qpair failed and we were unable to recover it.
00:29:16.709 [2024-06-10 12:18:05.947729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.709 [2024-06-10 12:18:05.947745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.709 qpair failed and we were unable to recover it.
00:29:16.709 [2024-06-10 12:18:05.947851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.947867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.947968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.947984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.948146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.948164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.948343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.948359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.948523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.948541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.948634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.948651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.948878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.948895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.949077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.949106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.949294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.949311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.949410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.949425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.949569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.949587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.949769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.949786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.949903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.949919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.950145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.950161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.950261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.950277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.950390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.950407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.950509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.950525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.950628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.950645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.950736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.950752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.950931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.950948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.951046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.951062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.951243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.951260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.951332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.951349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.951605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.951622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.951782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.951798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.951955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.951972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.952065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.952089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.952190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.952207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.952367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.952383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.952492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.952509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.952616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.952633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.952720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.952736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.952868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.952885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.953062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.953078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.953264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.953284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.953404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.953421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.953521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.953538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.953718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.953735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.953883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.953900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.954057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.954073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.954172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.954189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.954295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.954311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.954413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.954430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.954513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.954535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.954644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.954659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.954740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.954756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.954845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.954861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.955031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.955049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.955217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.955243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.955356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.955372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.955470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.955493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.955661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.955677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.955795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.955821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.955936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.955952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.710 [2024-06-10 12:18:05.956175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.710 [2024-06-10 12:18:05.956191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.710 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.956436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.956453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.956584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.956601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.956704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.956720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.956882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.956899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.957006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.957024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.957120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.957136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.957236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.957260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.957353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.957371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.957541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.957559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.957666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.957683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.957780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.957797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.957865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.957882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.957988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.958004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.958200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.958217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.958375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.958392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.958501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.958519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.958680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.958697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.958798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.958814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.959041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.959058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.959228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.959246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.959417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.959433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.959541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.959557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.959655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.959671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.959833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.959850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.959947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.959963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.960197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.960213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.960302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.960318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.960434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.960450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.960536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.960553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.960654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.960670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.960797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.960813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.960922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.960938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.961100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.961116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.961221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.961240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.961387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.961403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.961507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.961525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.961629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.961645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.961737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.961754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.961914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.961930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.962035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.962051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.962212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.962229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.962411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.962427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.962625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.962642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.962753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.962773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.962929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.962946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.963038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.963054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.963234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.963251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.963444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.711 [2024-06-10 12:18:05.963461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.711 qpair failed and we were unable to recover it.
00:29:16.711 [2024-06-10 12:18:05.963565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.712 [2024-06-10 12:18:05.963582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.712 qpair failed and we were unable to recover it.
00:29:16.712 [2024-06-10 12:18:05.963681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.963698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.963811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.963828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.963935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.963952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.964067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.964086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.964260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.964277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 
00:29:16.712 [2024-06-10 12:18:05.964363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.964380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.964485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.964502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.964602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.964619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.964727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.964743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.964843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.964860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 
00:29:16.712 [2024-06-10 12:18:05.965024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.965041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.965132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.965151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.965318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.965335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.965483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.965500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.965668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.965685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 
00:29:16.712 [2024-06-10 12:18:05.965769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.965786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.965944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.965961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.966129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.966145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.966370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.966387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.966566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.966582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 
00:29:16.712 [2024-06-10 12:18:05.966759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.966776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.966948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.966965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.967136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.967152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.967272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.967289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.967455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.967472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 
00:29:16.712 [2024-06-10 12:18:05.967598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.967630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.967804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.967822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.967999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.968022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.968185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.968202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.968299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.968316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 
00:29:16.712 [2024-06-10 12:18:05.968484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.968502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.968608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.968625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.968733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.968751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.968915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.968933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.969007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.969023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 
00:29:16.712 [2024-06-10 12:18:05.969184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.969201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.969322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.969340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.969445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.969462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.969580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.969604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.969782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.969799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 
00:29:16.712 [2024-06-10 12:18:05.969904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.969921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.970015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.970032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.970205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.970222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.970444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.970461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.970641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.970658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 
00:29:16.712 [2024-06-10 12:18:05.970772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.970789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.970895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.970911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.971072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.971089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.971199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.971215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.971371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.971387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 
00:29:16.712 [2024-06-10 12:18:05.971559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.971576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.971804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.971821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.971987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.972003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.712 [2024-06-10 12:18:05.972098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.712 [2024-06-10 12:18:05.972114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.712 qpair failed and we were unable to recover it. 00:29:16.713 [2024-06-10 12:18:05.972215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-06-10 12:18:05.972231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 
00:29:16.713 [2024-06-10 12:18:05.972390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-06-10 12:18:05.972406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 00:29:16.713 [2024-06-10 12:18:05.972508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-06-10 12:18:05.972525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 00:29:16.713 [2024-06-10 12:18:05.972606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-06-10 12:18:05.972622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 00:29:16.713 [2024-06-10 12:18:05.972728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-06-10 12:18:05.972744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 00:29:16.713 [2024-06-10 12:18:05.972997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-06-10 12:18:05.973013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 
00:29:16.713 [2024-06-10 12:18:05.973168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-06-10 12:18:05.973184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 00:29:16.713 [2024-06-10 12:18:05.973278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-06-10 12:18:05.973295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 00:29:16.713 [2024-06-10 12:18:05.973369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-06-10 12:18:05.973385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 00:29:16.713 [2024-06-10 12:18:05.973601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-06-10 12:18:05.973618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 00:29:16.713 [2024-06-10 12:18:05.973777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-06-10 12:18:05.973794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 
00:29:16.713 [2024-06-10 12:18:05.973998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-06-10 12:18:05.974019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 00:29:16.713 [2024-06-10 12:18:05.974113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-06-10 12:18:05.974131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 00:29:16.713 [2024-06-10 12:18:05.974245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-06-10 12:18:05.974261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 00:29:16.713 [2024-06-10 12:18:05.974444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-06-10 12:18:05.974461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 00:29:16.713 [2024-06-10 12:18:05.974584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-06-10 12:18:05.974600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 
00:29:16.713 [2024-06-10 12:18:05.974708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-06-10 12:18:05.974724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 00:29:16.713 [2024-06-10 12:18:05.974808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-06-10 12:18:05.974824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 00:29:16.713 [2024-06-10 12:18:05.974914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-06-10 12:18:05.974931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 00:29:16.713 [2024-06-10 12:18:05.975034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-06-10 12:18:05.975050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 00:29:16.713 [2024-06-10 12:18:05.975155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-06-10 12:18:05.975172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 
00:29:16.713 [2024-06-10 12:18:05.975330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-06-10 12:18:05.975347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 00:29:16.713 [2024-06-10 12:18:05.975438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-06-10 12:18:05.975454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 00:29:16.713 [2024-06-10 12:18:05.975624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-06-10 12:18:05.975640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 00:29:16.713 [2024-06-10 12:18:05.975745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-06-10 12:18:05.975764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 00:29:16.713 [2024-06-10 12:18:05.975882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.713 [2024-06-10 12:18:05.975899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f504c000b90 with addr=10.0.0.2, port=4420 00:29:16.713 qpair failed and we were unable to recover it. 
00:29:16.714 [2024-06-10 12:18:05.986302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-06-10 12:18:05.986322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-06-10 12:18:05.986509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-06-10 12:18:05.986528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-06-10 12:18:05.986623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-06-10 12:18:05.986640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-06-10 12:18:05.986802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-06-10 12:18:05.986819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.714 [2024-06-10 12:18:05.986939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.714 [2024-06-10 12:18:05.986956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.714 qpair failed and we were unable to recover it.
00:29:16.715 [2024-06-10 12:18:05.993095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.715 [2024-06-10 12:18:05.993111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.715 qpair failed and we were unable to recover it. 00:29:16.715 [2024-06-10 12:18:05.993209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.715 [2024-06-10 12:18:05.993225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.715 qpair failed and we were unable to recover it. 00:29:16.715 [2024-06-10 12:18:05.993310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.715 [2024-06-10 12:18:05.993327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.715 qpair failed and we were unable to recover it. 00:29:16.715 [2024-06-10 12:18:05.993510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.715 [2024-06-10 12:18:05.993527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.715 qpair failed and we were unable to recover it. 00:29:16.715 [2024-06-10 12:18:05.993620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.993636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 
00:29:16.716 [2024-06-10 12:18:05.993721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.993737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-06-10 12:18:05.993856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.993872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-06-10 12:18:05.993944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.993960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-06-10 12:18:05.994077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.994093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-06-10 12:18:05.994181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.994197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 
00:29:16.716 [2024-06-10 12:18:05.994274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.994291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-06-10 12:18:05.994483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.994500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-06-10 12:18:05.994672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.994689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-06-10 12:18:05.994778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.994795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-06-10 12:18:05.994979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.994995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 
00:29:16.716 [2024-06-10 12:18:05.995154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.995171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-06-10 12:18:05.995273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.995289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-06-10 12:18:05.995451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.995467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-06-10 12:18:05.995558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.995575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-06-10 12:18:05.995732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.995748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 
00:29:16.716 [2024-06-10 12:18:05.995852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.995869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-06-10 12:18:05.996048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.996064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-06-10 12:18:05.996150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.996166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-06-10 12:18:05.996330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.996347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-06-10 12:18:05.996438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.996454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 
00:29:16.716 [2024-06-10 12:18:05.996564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.996580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-06-10 12:18:05.996675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.996692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-06-10 12:18:05.996808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.996824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-06-10 12:18:05.996939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.996955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-06-10 12:18:05.997054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.997070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 
00:29:16.716 [2024-06-10 12:18:05.997158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.997175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-06-10 12:18:05.997327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.997343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-06-10 12:18:05.997468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.997491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-06-10 12:18:05.997613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.997629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-06-10 12:18:05.997719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.997735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 
00:29:16.716 [2024-06-10 12:18:05.997832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.997850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-06-10 12:18:05.997931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.997947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-06-10 12:18:05.998041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.998057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-06-10 12:18:05.998212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.998228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-06-10 12:18:05.998318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.998334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 
00:29:16.716 [2024-06-10 12:18:05.998440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.998457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-06-10 12:18:05.998628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.998644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.716 qpair failed and we were unable to recover it. 00:29:16.716 [2024-06-10 12:18:05.998739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.716 [2024-06-10 12:18:05.998754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.717 qpair failed and we were unable to recover it. 00:29:16.717 [2024-06-10 12:18:05.998856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.717 [2024-06-10 12:18:05.998872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.717 qpair failed and we were unable to recover it. 00:29:16.717 [2024-06-10 12:18:05.999035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.717 [2024-06-10 12:18:05.999051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.717 qpair failed and we were unable to recover it. 
00:29:16.717 [2024-06-10 12:18:05.999214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.717 [2024-06-10 12:18:05.999231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.717 qpair failed and we were unable to recover it. 00:29:16.717 [2024-06-10 12:18:05.999340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.717 [2024-06-10 12:18:05.999356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.717 qpair failed and we were unable to recover it. 00:29:16.717 [2024-06-10 12:18:05.999433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.717 [2024-06-10 12:18:05.999449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.717 qpair failed and we were unable to recover it. 00:29:16.717 [2024-06-10 12:18:05.999613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.717 [2024-06-10 12:18:05.999630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.717 qpair failed and we were unable to recover it. 00:29:16.717 [2024-06-10 12:18:05.999723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.717 [2024-06-10 12:18:05.999740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.717 qpair failed and we were unable to recover it. 
00:29:16.717 [2024-06-10 12:18:05.999920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.717 [2024-06-10 12:18:05.999936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.717 qpair failed and we were unable to recover it. 00:29:16.717 [2024-06-10 12:18:06.000083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.717 [2024-06-10 12:18:06.000099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.717 qpair failed and we were unable to recover it. 00:29:16.717 [2024-06-10 12:18:06.000260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.717 [2024-06-10 12:18:06.000276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.717 qpair failed and we were unable to recover it. 00:29:16.717 [2024-06-10 12:18:06.000392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.717 [2024-06-10 12:18:06.000408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.717 qpair failed and we were unable to recover it. 00:29:16.717 [2024-06-10 12:18:06.000657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.717 [2024-06-10 12:18:06.000674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.717 qpair failed and we were unable to recover it. 
00:29:16.717 [2024-06-10 12:18:06.000836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.717 [2024-06-10 12:18:06.000852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.717 qpair failed and we were unable to recover it. 00:29:16.717 [2024-06-10 12:18:06.001032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.717 [2024-06-10 12:18:06.001049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.717 qpair failed and we were unable to recover it. 00:29:16.717 [2024-06-10 12:18:06.001205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.717 [2024-06-10 12:18:06.001221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.717 qpair failed and we were unable to recover it. 00:29:16.717 [2024-06-10 12:18:06.001351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.717 [2024-06-10 12:18:06.001367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.717 qpair failed and we were unable to recover it. 00:29:16.717 [2024-06-10 12:18:06.001458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.717 [2024-06-10 12:18:06.001474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.717 qpair failed and we were unable to recover it. 
00:29:16.717 [2024-06-10 12:18:06.001722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.717 [2024-06-10 12:18:06.001738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.717 qpair failed and we were unable to recover it. 00:29:16.717 [2024-06-10 12:18:06.001828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.717 [2024-06-10 12:18:06.001843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.717 qpair failed and we were unable to recover it. 00:29:16.717 [2024-06-10 12:18:06.002015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.717 [2024-06-10 12:18:06.002031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.717 qpair failed and we were unable to recover it. 00:29:16.717 [2024-06-10 12:18:06.002211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.717 [2024-06-10 12:18:06.002227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.717 qpair failed and we were unable to recover it. 00:29:16.717 [2024-06-10 12:18:06.002323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.717 [2024-06-10 12:18:06.002339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.717 qpair failed and we were unable to recover it. 
00:29:16.717 [2024-06-10 12:18:06.002504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.717 [2024-06-10 12:18:06.002520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.717 qpair failed and we were unable to recover it. 00:29:16.717 [2024-06-10 12:18:06.002619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.717 [2024-06-10 12:18:06.002635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.717 qpair failed and we were unable to recover it. 00:29:16.717 [2024-06-10 12:18:06.002807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.717 [2024-06-10 12:18:06.002823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.717 qpair failed and we were unable to recover it. 00:29:16.717 [2024-06-10 12:18:06.002981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.717 [2024-06-10 12:18:06.002997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.717 qpair failed and we were unable to recover it. 00:29:16.717 [2024-06-10 12:18:06.003121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.717 [2024-06-10 12:18:06.003137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.717 qpair failed and we were unable to recover it. 
00:29:16.717 [2024-06-10 12:18:06.003256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.717 [2024-06-10 12:18:06.003272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.717 qpair failed and we were unable to recover it. 00:29:16.717 [2024-06-10 12:18:06.003399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.717 [2024-06-10 12:18:06.003416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.717 qpair failed and we were unable to recover it. 00:29:16.717 [2024-06-10 12:18:06.003540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.717 [2024-06-10 12:18:06.003556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.717 qpair failed and we were unable to recover it. 00:29:16.717 [2024-06-10 12:18:06.003733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.717 [2024-06-10 12:18:06.003749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.718 qpair failed and we were unable to recover it. 00:29:16.718 [2024-06-10 12:18:06.003970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.718 [2024-06-10 12:18:06.003987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.718 qpair failed and we were unable to recover it. 
00:29:16.718 [2024-06-10 12:18:06.004145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.718 [2024-06-10 12:18:06.004163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.718 qpair failed and we were unable to recover it.
[identical log triplets elided: the same connect() failure (errno = 111) and unrecoverable-qpair error for tqpair=0x7f503c000b90, addr=10.0.0.2, port=4420 repeat verbatim for every retry from 12:18:06.004376 through 12:18:06.022943]
00:29:16.721 [2024-06-10 12:18:06.023167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.023183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 00:29:16.721 [2024-06-10 12:18:06.023293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.023309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 00:29:16.721 [2024-06-10 12:18:06.023524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.023540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 00:29:16.721 [2024-06-10 12:18:06.023662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.023679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 00:29:16.721 [2024-06-10 12:18:06.023774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.023790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 
00:29:16.721 [2024-06-10 12:18:06.024030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.024048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 00:29:16.721 [2024-06-10 12:18:06.024159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.024175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 00:29:16.721 [2024-06-10 12:18:06.024352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.024369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 00:29:16.721 [2024-06-10 12:18:06.024528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.024545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 00:29:16.721 [2024-06-10 12:18:06.024701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.024717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 
00:29:16.721 [2024-06-10 12:18:06.024936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.024953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 00:29:16.721 [2024-06-10 12:18:06.025124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.025140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 00:29:16.721 [2024-06-10 12:18:06.025303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.025319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 00:29:16.721 [2024-06-10 12:18:06.025414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.025430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 00:29:16.721 [2024-06-10 12:18:06.025534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.025550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 
00:29:16.721 [2024-06-10 12:18:06.025712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.025729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 00:29:16.721 [2024-06-10 12:18:06.025897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.025913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 00:29:16.721 [2024-06-10 12:18:06.026011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.026027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 00:29:16.721 [2024-06-10 12:18:06.026126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.026142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 00:29:16.721 [2024-06-10 12:18:06.026304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.026320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 
00:29:16.721 [2024-06-10 12:18:06.026422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.026438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 00:29:16.721 [2024-06-10 12:18:06.026540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.026557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 00:29:16.721 [2024-06-10 12:18:06.026648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.026664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 00:29:16.721 [2024-06-10 12:18:06.026775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.026790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 00:29:16.721 [2024-06-10 12:18:06.026971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.026988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 
00:29:16.721 [2024-06-10 12:18:06.027110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.027126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 00:29:16.721 [2024-06-10 12:18:06.027303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.027318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 00:29:16.721 [2024-06-10 12:18:06.027408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.027425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 00:29:16.721 [2024-06-10 12:18:06.027589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.027605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 00:29:16.721 [2024-06-10 12:18:06.027708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.027724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 
00:29:16.721 [2024-06-10 12:18:06.027846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.027861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 00:29:16.721 [2024-06-10 12:18:06.028053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.028069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 00:29:16.721 [2024-06-10 12:18:06.028185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.028201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 00:29:16.721 [2024-06-10 12:18:06.028302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.028318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 00:29:16.721 [2024-06-10 12:18:06.028447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.028464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.721 qpair failed and we were unable to recover it. 
00:29:16.721 [2024-06-10 12:18:06.028733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.721 [2024-06-10 12:18:06.028750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.722 qpair failed and we were unable to recover it. 00:29:16.722 [2024-06-10 12:18:06.028924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.722 [2024-06-10 12:18:06.028940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.722 qpair failed and we were unable to recover it. 00:29:16.722 [2024-06-10 12:18:06.029033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.722 [2024-06-10 12:18:06.029049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.722 qpair failed and we were unable to recover it. 00:29:16.722 [2024-06-10 12:18:06.029149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.722 [2024-06-10 12:18:06.029164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.722 qpair failed and we were unable to recover it. 00:29:16.722 [2024-06-10 12:18:06.029338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.722 [2024-06-10 12:18:06.029355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.722 qpair failed and we were unable to recover it. 
00:29:16.722 [2024-06-10 12:18:06.029450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.722 [2024-06-10 12:18:06.029467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.722 qpair failed and we were unable to recover it. 00:29:16.722 [2024-06-10 12:18:06.029636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.722 [2024-06-10 12:18:06.029652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.722 qpair failed and we were unable to recover it. 00:29:16.722 [2024-06-10 12:18:06.029753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.722 [2024-06-10 12:18:06.029770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.722 qpair failed and we were unable to recover it. 00:29:16.722 [2024-06-10 12:18:06.029979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.722 [2024-06-10 12:18:06.029995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.722 qpair failed and we were unable to recover it. 00:29:16.722 [2024-06-10 12:18:06.030159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.722 [2024-06-10 12:18:06.030175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.722 qpair failed and we were unable to recover it. 
00:29:16.722 12:18:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:29:16.722 12:18:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0
00:29:16.722 12:18:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:29:16.722 12:18:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable
00:29:16.722 12:18:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect()/qpair-failed retries against tqpair=0x7f503c000b90, addr=10.0.0.2, port=4420 continue interleaved with the shell trace above, 12:18:06.030275 through 12:18:06.033586, all errno = 111 ...]
00:29:16.722 [2024-06-10 12:18:06.033746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.722 [2024-06-10 12:18:06.033762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.722 qpair failed and we were unable to recover it.
00:29:16.722 [2024-06-10 12:18:06.033871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.722 [2024-06-10 12:18:06.033899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.722 qpair failed and we were unable to recover it.
[... retries now target tqpair=0x7f5044000b90 and repeat from 12:18:06.033985 through 12:18:06.036787, all errno = 111, addr=10.0.0.2, port=4420 ...]
00:29:16.723 [2024-06-10 12:18:06.036887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.723 [2024-06-10 12:18:06.036899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.723 qpair failed and we were unable to recover it. 00:29:16.723 [2024-06-10 12:18:06.037174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.723 [2024-06-10 12:18:06.037186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.723 qpair failed and we were unable to recover it. 00:29:16.723 [2024-06-10 12:18:06.037421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.723 [2024-06-10 12:18:06.037433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.723 qpair failed and we were unable to recover it. 00:29:16.723 [2024-06-10 12:18:06.037500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.723 [2024-06-10 12:18:06.037512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.723 qpair failed and we were unable to recover it. 00:29:16.723 [2024-06-10 12:18:06.037680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.723 [2024-06-10 12:18:06.037693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.723 qpair failed and we were unable to recover it. 
00:29:16.723 [2024-06-10 12:18:06.037801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.723 [2024-06-10 12:18:06.037813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.723 qpair failed and we were unable to recover it. 00:29:16.723 [2024-06-10 12:18:06.037916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.723 [2024-06-10 12:18:06.037928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.723 qpair failed and we were unable to recover it. 00:29:16.723 [2024-06-10 12:18:06.038037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.723 [2024-06-10 12:18:06.038049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.723 qpair failed and we were unable to recover it. 00:29:16.723 [2024-06-10 12:18:06.038216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.723 [2024-06-10 12:18:06.038228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.723 qpair failed and we were unable to recover it. 00:29:16.723 [2024-06-10 12:18:06.038458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.723 [2024-06-10 12:18:06.038470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.723 qpair failed and we were unable to recover it. 
00:29:16.723 [2024-06-10 12:18:06.038708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.723 [2024-06-10 12:18:06.038721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.723 qpair failed and we were unable to recover it. 00:29:16.723 [2024-06-10 12:18:06.038794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.723 [2024-06-10 12:18:06.038806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.723 qpair failed and we were unable to recover it. 00:29:16.723 [2024-06-10 12:18:06.038974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.723 [2024-06-10 12:18:06.038987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.723 qpair failed and we were unable to recover it. 00:29:16.723 [2024-06-10 12:18:06.039097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.723 [2024-06-10 12:18:06.039109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.723 qpair failed and we were unable to recover it. 00:29:16.723 [2024-06-10 12:18:06.039199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.723 [2024-06-10 12:18:06.039210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.723 qpair failed and we were unable to recover it. 
00:29:16.723 [2024-06-10 12:18:06.039422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.723 [2024-06-10 12:18:06.039434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.723 qpair failed and we were unable to recover it. 00:29:16.723 [2024-06-10 12:18:06.039550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.723 [2024-06-10 12:18:06.039563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.723 qpair failed and we were unable to recover it. 00:29:16.723 [2024-06-10 12:18:06.039666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.723 [2024-06-10 12:18:06.039679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.723 qpair failed and we were unable to recover it. 00:29:16.723 [2024-06-10 12:18:06.039836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.723 [2024-06-10 12:18:06.039848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.723 qpair failed and we were unable to recover it. 00:29:16.723 [2024-06-10 12:18:06.040044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.723 [2024-06-10 12:18:06.040057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.723 qpair failed and we were unable to recover it. 
00:29:16.723 [2024-06-10 12:18:06.040156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.723 [2024-06-10 12:18:06.040167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.723 qpair failed and we were unable to recover it. 00:29:16.723 [2024-06-10 12:18:06.040326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.723 [2024-06-10 12:18:06.040338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.723 qpair failed and we were unable to recover it. 00:29:16.723 [2024-06-10 12:18:06.040612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.723 [2024-06-10 12:18:06.040625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.723 qpair failed and we were unable to recover it. 00:29:16.723 [2024-06-10 12:18:06.040736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.723 [2024-06-10 12:18:06.040749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.723 qpair failed and we were unable to recover it. 00:29:16.723 [2024-06-10 12:18:06.040916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.723 [2024-06-10 12:18:06.040927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.723 qpair failed and we were unable to recover it. 
00:29:16.723 [2024-06-10 12:18:06.041099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.041112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 00:29:16.724 [2024-06-10 12:18:06.041269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.041280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 00:29:16.724 [2024-06-10 12:18:06.041439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.041451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 00:29:16.724 [2024-06-10 12:18:06.041533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.041546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 00:29:16.724 [2024-06-10 12:18:06.041709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.041724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 
00:29:16.724 [2024-06-10 12:18:06.041797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.041809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 00:29:16.724 [2024-06-10 12:18:06.041972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.041985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 00:29:16.724 [2024-06-10 12:18:06.042187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.042200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 00:29:16.724 [2024-06-10 12:18:06.042372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.042384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 00:29:16.724 [2024-06-10 12:18:06.042532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.042544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 
00:29:16.724 [2024-06-10 12:18:06.042663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.042676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 00:29:16.724 [2024-06-10 12:18:06.042873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.042885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 00:29:16.724 [2024-06-10 12:18:06.043057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.043069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 00:29:16.724 [2024-06-10 12:18:06.043218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.043230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 00:29:16.724 [2024-06-10 12:18:06.043367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.043379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 
00:29:16.724 [2024-06-10 12:18:06.043639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.043653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 00:29:16.724 [2024-06-10 12:18:06.043816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.043828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 00:29:16.724 [2024-06-10 12:18:06.043950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.043963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 00:29:16.724 [2024-06-10 12:18:06.044081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.044094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 00:29:16.724 [2024-06-10 12:18:06.044277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.044290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 
00:29:16.724 [2024-06-10 12:18:06.044533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.044545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 00:29:16.724 [2024-06-10 12:18:06.044681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.044693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 00:29:16.724 [2024-06-10 12:18:06.044765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.044777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 00:29:16.724 [2024-06-10 12:18:06.044927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.044939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 00:29:16.724 [2024-06-10 12:18:06.045147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.045160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 
00:29:16.724 [2024-06-10 12:18:06.045309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.045322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 00:29:16.724 [2024-06-10 12:18:06.045558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.045571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 00:29:16.724 [2024-06-10 12:18:06.045671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.045684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 00:29:16.724 [2024-06-10 12:18:06.045840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.045852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 00:29:16.724 [2024-06-10 12:18:06.045983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.045997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 
00:29:16.724 [2024-06-10 12:18:06.046240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.046254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 00:29:16.724 [2024-06-10 12:18:06.046416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.046429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 00:29:16.724 [2024-06-10 12:18:06.046641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.046654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 00:29:16.724 [2024-06-10 12:18:06.046871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.046883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 00:29:16.724 [2024-06-10 12:18:06.046996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.047008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 
00:29:16.724 [2024-06-10 12:18:06.047138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.047150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 00:29:16.724 [2024-06-10 12:18:06.047432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.047444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 00:29:16.724 [2024-06-10 12:18:06.047658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.047670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 00:29:16.724 [2024-06-10 12:18:06.047754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.047766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 00:29:16.724 [2024-06-10 12:18:06.047937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.047949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 
00:29:16.724 [2024-06-10 12:18:06.048073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.724 [2024-06-10 12:18:06.048085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.724 qpair failed and we were unable to recover it. 00:29:16.725 [2024-06-10 12:18:06.048185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.725 [2024-06-10 12:18:06.048197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.725 qpair failed and we were unable to recover it. 00:29:16.725 [2024-06-10 12:18:06.048305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.725 [2024-06-10 12:18:06.048318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.725 qpair failed and we were unable to recover it. 00:29:16.725 [2024-06-10 12:18:06.048470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.725 [2024-06-10 12:18:06.048489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.725 qpair failed and we were unable to recover it. 00:29:16.725 [2024-06-10 12:18:06.048668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.725 [2024-06-10 12:18:06.048681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.725 qpair failed and we were unable to recover it. 
00:29:16.725 [2024-06-10 12:18:06.048798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.725 [2024-06-10 12:18:06.048811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.725 qpair failed and we were unable to recover it. 00:29:16.725 [2024-06-10 12:18:06.048913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.725 [2024-06-10 12:18:06.048925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.725 qpair failed and we were unable to recover it. 00:29:16.725 [2024-06-10 12:18:06.049082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.725 [2024-06-10 12:18:06.049095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.725 qpair failed and we were unable to recover it. 00:29:16.725 [2024-06-10 12:18:06.049179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.725 [2024-06-10 12:18:06.049191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.725 qpair failed and we were unable to recover it. 00:29:16.725 [2024-06-10 12:18:06.049353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.725 [2024-06-10 12:18:06.049365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.725 qpair failed and we were unable to recover it. 
00:29:16.725 [2024-06-10 12:18:06.049563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.725 [2024-06-10 12:18:06.049576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.725 qpair failed and we were unable to recover it.
00:29:16.727 [last two messages repeated ~115 times between 12:18:06.049563 and 12:18:06.063226, identical except for timestamps: connect() to 10.0.0.2:4420 kept failing with errno = 111 (ECONNREFUSED) and qpair 0x7f5044000b90 could not be recovered]
00:29:16.727 [2024-06-10 12:18:06.063309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.727 [2024-06-10 12:18:06.063321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.727 qpair failed and we were unable to recover it. 00:29:16.727 [2024-06-10 12:18:06.063407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.727 [2024-06-10 12:18:06.063419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.727 qpair failed and we were unable to recover it. 00:29:16.727 [2024-06-10 12:18:06.063515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.727 [2024-06-10 12:18:06.063528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.727 qpair failed and we were unable to recover it. 00:29:16.727 [2024-06-10 12:18:06.063629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.727 [2024-06-10 12:18:06.063641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.727 qpair failed and we were unable to recover it. 00:29:16.727 [2024-06-10 12:18:06.063715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.727 [2024-06-10 12:18:06.063727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.727 qpair failed and we were unable to recover it. 
00:29:16.727 [2024-06-10 12:18:06.063884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.727 [2024-06-10 12:18:06.063896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.727 qpair failed and we were unable to recover it. 00:29:16.727 [2024-06-10 12:18:06.064010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.727 [2024-06-10 12:18:06.064023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.727 qpair failed and we were unable to recover it. 00:29:16.727 [2024-06-10 12:18:06.064109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.727 [2024-06-10 12:18:06.064121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.727 qpair failed and we were unable to recover it. 00:29:16.727 [2024-06-10 12:18:06.064200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.727 [2024-06-10 12:18:06.064212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.727 qpair failed and we were unable to recover it. 00:29:16.727 [2024-06-10 12:18:06.064326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.727 [2024-06-10 12:18:06.064338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.727 qpair failed and we were unable to recover it. 
00:29:16.727 [2024-06-10 12:18:06.064428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.727 [2024-06-10 12:18:06.064442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.727 qpair failed and we were unable to recover it. 00:29:16.727 [2024-06-10 12:18:06.064539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.727 [2024-06-10 12:18:06.064552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.727 qpair failed and we were unable to recover it. 00:29:16.727 [2024-06-10 12:18:06.064702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.727 [2024-06-10 12:18:06.064715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.727 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.064816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.064828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.064926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.064939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 
00:29:16.728 [2024-06-10 12:18:06.065088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.065100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.065246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.065258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.065338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.065350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.065441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.065453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.065566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.065579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 
00:29:16.728 [2024-06-10 12:18:06.065665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.065677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.065749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.065761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.065852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.065864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.065959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.065971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.066112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.066124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 
00:29:16.728 [2024-06-10 12:18:06.066199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.066211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.066293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.066307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.066456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.066468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.066560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.066572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.066718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.066730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 
00:29:16.728 [2024-06-10 12:18:06.066829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.066841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.066923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.066935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.067036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.067048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.067139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.067151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.067242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.067254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 
00:29:16.728 [2024-06-10 12:18:06.067401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.067413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.067501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.067513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.067604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.067616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.067712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.067724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.067811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.067822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 
00:29:16.728 [2024-06-10 12:18:06.067995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.068007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.068248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.068259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.068417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.068430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.068624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.068636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.068825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.068837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 
00:29:16.728 [2024-06-10 12:18:06.068934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.068946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.069151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.069163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.069276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.069288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.069555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.069567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.069675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.069688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 
00:29:16.728 [2024-06-10 12:18:06.069794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.069807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.069907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.069918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.070040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.070052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.070255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.070267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.070391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.070403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 
00:29:16.728 [2024-06-10 12:18:06.070586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.070600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.070710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.070723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.070822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.070834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.070934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.070946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.071104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.071116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 
00:29:16.728 [2024-06-10 12:18:06.071327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.071340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.071568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.071581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.071677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.728 [2024-06-10 12:18:06.071689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.728 qpair failed and we were unable to recover it. 00:29:16.728 [2024-06-10 12:18:06.071908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.729 [2024-06-10 12:18:06.071920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.729 qpair failed and we were unable to recover it. 00:29:16.729 [2024-06-10 12:18:06.072147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.729 [2024-06-10 12:18:06.072159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.729 qpair failed and we were unable to recover it. 
00:29:16.729 [2024-06-10 12:18:06.072259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.729 [2024-06-10 12:18:06.072271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.729 qpair failed and we were unable to recover it. 00:29:16.729 [2024-06-10 12:18:06.072447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.729 [2024-06-10 12:18:06.072462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.729 qpair failed and we were unable to recover it. 00:29:16.729 [2024-06-10 12:18:06.072582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.729 [2024-06-10 12:18:06.072594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.729 qpair failed and we were unable to recover it. 00:29:16.729 [2024-06-10 12:18:06.072707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.729 [2024-06-10 12:18:06.072719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.729 qpair failed and we were unable to recover it. 00:29:16.729 [2024-06-10 12:18:06.072818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.729 [2024-06-10 12:18:06.072831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.729 qpair failed and we were unable to recover it. 
00:29:16.729 [2024-06-10 12:18:06.072924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.729 [2024-06-10 12:18:06.072937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.729 qpair failed and we were unable to recover it. 00:29:16.729 [2024-06-10 12:18:06.073202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.729 [2024-06-10 12:18:06.073214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.729 qpair failed and we were unable to recover it. 00:29:16.729 [2024-06-10 12:18:06.073454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.729 [2024-06-10 12:18:06.073466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.729 qpair failed and we were unable to recover it. 00:29:16.729 [2024-06-10 12:18:06.073644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.729 [2024-06-10 12:18:06.073656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.729 qpair failed and we were unable to recover it. 00:29:16.729 [2024-06-10 12:18:06.073778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.729 [2024-06-10 12:18:06.073790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.729 qpair failed and we were unable to recover it. 
00:29:16.729 [2024-06-10 12:18:06.073947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.729 [2024-06-10 12:18:06.073959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.729 qpair failed and we were unable to recover it.
00:29:16.729 [... identical connect() failed / sock connection error / qpair failed triplet repeats continuously from 12:18:06.074049 through 12:18:06.092706 while the test script proceeds ...]
00:29:16.729 12:18:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:16.729 12:18:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:16.729 12:18:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:16.729 12:18:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:16.731 [2024-06-10 12:18:06.092814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.731 [2024-06-10 12:18:06.092826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.731 qpair failed and we were unable to recover it. 00:29:16.731 [2024-06-10 12:18:06.092984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.731 [2024-06-10 12:18:06.092996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.731 qpair failed and we were unable to recover it. 00:29:16.731 [2024-06-10 12:18:06.093092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.731 [2024-06-10 12:18:06.093105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.731 qpair failed and we were unable to recover it. 00:29:16.731 [2024-06-10 12:18:06.093184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.731 [2024-06-10 12:18:06.093195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.731 qpair failed and we were unable to recover it. 00:29:16.731 [2024-06-10 12:18:06.093308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.731 [2024-06-10 12:18:06.093320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.731 qpair failed and we were unable to recover it. 
00:29:16.731 [2024-06-10 12:18:06.093405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.731 [2024-06-10 12:18:06.093418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.731 qpair failed and we were unable to recover it. 00:29:16.731 [2024-06-10 12:18:06.093524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.731 [2024-06-10 12:18:06.093536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.731 qpair failed and we were unable to recover it. 00:29:16.731 [2024-06-10 12:18:06.093656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.731 [2024-06-10 12:18:06.093668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.731 qpair failed and we were unable to recover it. 00:29:16.731 Malloc0 00:29:16.731 [2024-06-10 12:18:06.093828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.731 [2024-06-10 12:18:06.093841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.731 qpair failed and we were unable to recover it. 00:29:16.731 [2024-06-10 12:18:06.094003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.731 [2024-06-10 12:18:06.094015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.731 qpair failed and we were unable to recover it. 
00:29:16.731 [2024-06-10 12:18:06.094120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.731 [2024-06-10 12:18:06.094131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.731 qpair failed and we were unable to recover it. 00:29:16.731 [2024-06-10 12:18:06.094280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.731 [2024-06-10 12:18:06.094292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.731 qpair failed and we were unable to recover it. 00:29:16.731 [2024-06-10 12:18:06.094442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.731 12:18:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.731 [2024-06-10 12:18:06.094455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.731 qpair failed and we were unable to recover it. 00:29:16.731 [2024-06-10 12:18:06.094544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.731 [2024-06-10 12:18:06.094556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.731 qpair failed and we were unable to recover it. 00:29:16.731 [2024-06-10 12:18:06.094660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.731 [2024-06-10 12:18:06.094673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.731 qpair failed and we were unable to recover it. 
00:29:16.731 [2024-06-10 12:18:06.094788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.731 [2024-06-10 12:18:06.094801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.731 12:18:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:16.731 qpair failed and we were unable to recover it. 00:29:16.731 [2024-06-10 12:18:06.094905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.731 [2024-06-10 12:18:06.094926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.731 qpair failed and we were unable to recover it. 00:29:16.731 [2024-06-10 12:18:06.095022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.731 [2024-06-10 12:18:06.095035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.731 qpair failed and we were unable to recover it. 00:29:16.731 12:18:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.731 [2024-06-10 12:18:06.095148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.731 [2024-06-10 12:18:06.095160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.731 qpair failed and we were unable to recover it. 
00:29:16.731 [2024-06-10 12:18:06.095307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.731 [2024-06-10 12:18:06.095322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.731 qpair failed and we were unable to recover it. 00:29:16.731 12:18:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:16.731 [2024-06-10 12:18:06.095472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.731 [2024-06-10 12:18:06.095489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.731 qpair failed and we were unable to recover it. 00:29:16.731 [2024-06-10 12:18:06.095571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.731 [2024-06-10 12:18:06.095584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.731 qpair failed and we were unable to recover it. 00:29:16.731 [2024-06-10 12:18:06.095679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.731 [2024-06-10 12:18:06.095691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.731 qpair failed and we were unable to recover it. 00:29:16.731 [2024-06-10 12:18:06.095864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.731 [2024-06-10 12:18:06.095876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 
00:29:16.732 [2024-06-10 12:18:06.095970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.095983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.096078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.096092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.096178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.096190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.096354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.096366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.096432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.096444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 
00:29:16.732 [2024-06-10 12:18:06.096559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.096572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.096779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.096792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.096892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.096904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.097077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.097089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.097276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.097289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 
00:29:16.732 [2024-06-10 12:18:06.097402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.097414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.097629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.097642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.097792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.097804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.097898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.097910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.098066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.098077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 
00:29:16.732 [2024-06-10 12:18:06.098168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.098180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.098281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.098293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.098398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.098410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.098502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.098514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.098618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.098630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 
00:29:16.732 [2024-06-10 12:18:06.098712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.098724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.098833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.098845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.099037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.099049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.099215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.099228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.099321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.099333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 
00:29:16.732 [2024-06-10 12:18:06.099498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.099511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.099668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.099680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.099773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.099785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.099955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.099966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.100062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.100075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 
00:29:16.732 [2024-06-10 12:18:06.100175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.100187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.100282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.100294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.100511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.100524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.100625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.100638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.100806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.100818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 
00:29:16.732 [2024-06-10 12:18:06.100904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.100915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.101128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.101140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.101240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.101252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.101407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.101419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.101524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.101518] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:16.732 [2024-06-10 12:18:06.101537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 
00:29:16.732 [2024-06-10 12:18:06.101693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.101705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.101853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.101865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.101958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.101970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.102065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.102077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.102238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.102249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 
00:29:16.732 [2024-06-10 12:18:06.102332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.102344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.102432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.102444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.102525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.102538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.102637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.102648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 00:29:16.732 [2024-06-10 12:18:06.102734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.732 [2024-06-10 12:18:06.102747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.732 qpair failed and we were unable to recover it. 
00:29:16.732 [2024-06-10 12:18:06.102911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.732 [2024-06-10 12:18:06.102923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.732 qpair failed and we were unable to recover it.
[... connect() failed, errno = 111 / sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. -- repeated through 12:18:06.109565 ...]
[... connect() failed (errno = 111) / tqpair=0x7f5044000b90 errors continue ...]
00:29:16.734 12:18:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:16.734 12:18:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:16.734 12:18:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:16.734 12:18:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect() failed (errno = 111) / tqpair=0x7f5044000b90 errors continue to interleave with the shell trace ...]
00:29:16.734 [2024-06-10 12:18:06.112587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.734 [2024-06-10 12:18:06.112620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88cfc0 with addr=10.0.0.2, port=4420
00:29:16.734 qpair failed and we were unable to recover it.
00:29:16.734 [2024-06-10 12:18:06.112857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.734 [2024-06-10 12:18:06.112880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420
00:29:16.734 qpair failed and we were unable to recover it.
[... the same errors repeated for tqpair=0x7f503c000b90 ...]
00:29:16.734 [2024-06-10 12:18:06.113551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.734 [2024-06-10 12:18:06.113569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.734 qpair failed and we were unable to recover it. 00:29:16.734 [2024-06-10 12:18:06.113747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.734 [2024-06-10 12:18:06.113764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.734 qpair failed and we were unable to recover it. 00:29:16.734 [2024-06-10 12:18:06.113935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.734 [2024-06-10 12:18:06.113951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f503c000b90 with addr=10.0.0.2, port=4420 00:29:16.734 qpair failed and we were unable to recover it. 00:29:16.734 [2024-06-10 12:18:06.114052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.734 [2024-06-10 12:18:06.114065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.734 qpair failed and we were unable to recover it. 00:29:16.734 [2024-06-10 12:18:06.114186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.734 [2024-06-10 12:18:06.114198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.734 qpair failed and we were unable to recover it. 
00:29:16.734 [2024-06-10 12:18:06.114267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.735 [2024-06-10 12:18:06.114279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.735 qpair failed and we were unable to recover it. 00:29:16.735 [2024-06-10 12:18:06.114359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.735 [2024-06-10 12:18:06.114371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.735 qpair failed and we were unable to recover it. 00:29:16.735 [2024-06-10 12:18:06.114603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.735 [2024-06-10 12:18:06.114616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.735 qpair failed and we were unable to recover it. 00:29:16.735 [2024-06-10 12:18:06.114829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.735 [2024-06-10 12:18:06.114841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.735 qpair failed and we were unable to recover it. 00:29:16.735 [2024-06-10 12:18:06.114949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.735 [2024-06-10 12:18:06.114962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.735 qpair failed and we were unable to recover it. 
00:29:16.735 [2024-06-10 12:18:06.115114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.735 [2024-06-10 12:18:06.115126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.735 qpair failed and we were unable to recover it. 00:29:16.735 [2024-06-10 12:18:06.115216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.735 [2024-06-10 12:18:06.115228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.735 qpair failed and we were unable to recover it. 00:29:16.735 [2024-06-10 12:18:06.115337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.735 [2024-06-10 12:18:06.115349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.735 qpair failed and we were unable to recover it. 00:29:16.735 [2024-06-10 12:18:06.115437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.735 [2024-06-10 12:18:06.115449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.735 qpair failed and we were unable to recover it. 00:29:16.735 [2024-06-10 12:18:06.115552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.735 [2024-06-10 12:18:06.115564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.735 qpair failed and we were unable to recover it. 
00:29:16.735 [2024-06-10 12:18:06.115651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.735 [2024-06-10 12:18:06.115663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.735 qpair failed and we were unable to recover it. 00:29:16.735 [2024-06-10 12:18:06.115761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.735 [2024-06-10 12:18:06.115773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.735 qpair failed and we were unable to recover it. 00:29:16.735 [2024-06-10 12:18:06.115872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.735 [2024-06-10 12:18:06.115887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.735 qpair failed and we were unable to recover it. 00:29:16.735 [2024-06-10 12:18:06.115982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.735 [2024-06-10 12:18:06.115994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.735 qpair failed and we were unable to recover it. 00:29:16.735 [2024-06-10 12:18:06.116076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.735 [2024-06-10 12:18:06.116088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.735 qpair failed and we were unable to recover it. 
00:29:16.735 [2024-06-10 12:18:06.116165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.735 [2024-06-10 12:18:06.116177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.735 qpair failed and we were unable to recover it. 00:29:16.735 [2024-06-10 12:18:06.116270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.735 [2024-06-10 12:18:06.116282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.735 qpair failed and we were unable to recover it. 00:29:16.735 [2024-06-10 12:18:06.116376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.735 [2024-06-10 12:18:06.116388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.735 qpair failed and we were unable to recover it. 00:29:16.735 [2024-06-10 12:18:06.116545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.735 [2024-06-10 12:18:06.116557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.735 qpair failed and we were unable to recover it. 00:29:16.735 [2024-06-10 12:18:06.116652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.735 [2024-06-10 12:18:06.116664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.735 qpair failed and we were unable to recover it. 
00:29:16.735 [2024-06-10 12:18:06.116818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.735 [2024-06-10 12:18:06.116831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.735 qpair failed and we were unable to recover it. 00:29:16.735 [2024-06-10 12:18:06.116910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.735 [2024-06-10 12:18:06.116922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.735 qpair failed and we were unable to recover it. 00:29:16.735 [2024-06-10 12:18:06.117016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.735 [2024-06-10 12:18:06.117028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.735 qpair failed and we were unable to recover it. 00:29:16.735 [2024-06-10 12:18:06.117118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.735 [2024-06-10 12:18:06.117129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.735 qpair failed and we were unable to recover it. 00:29:16.735 [2024-06-10 12:18:06.117286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.735 [2024-06-10 12:18:06.117298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420 00:29:16.735 qpair failed and we were unable to recover it. 
00:29:16.735 [2024-06-10 12:18:06.117445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.735 [2024-06-10 12:18:06.117456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.735 qpair failed and we were unable to recover it.
00:29:16.735 [2024-06-10 12:18:06.117570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.735 [2024-06-10 12:18:06.117583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.735 qpair failed and we were unable to recover it.
00:29:16.735 [2024-06-10 12:18:06.117731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.117743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 [2024-06-10 12:18:06.117855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.117867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 [2024-06-10 12:18:06.118015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.118026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 [2024-06-10 12:18:06.118138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.118150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 [2024-06-10 12:18:06.118314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.118328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 12:18:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:16.736 [2024-06-10 12:18:06.118481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.118495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 [2024-06-10 12:18:06.118590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.118602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 [2024-06-10 12:18:06.118679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.118692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 12:18:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:16.736 [2024-06-10 12:18:06.118856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.118870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 12:18:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:16.736 [2024-06-10 12:18:06.119146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.119160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 [2024-06-10 12:18:06.119297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.119310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 12:18:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:16.736 [2024-06-10 12:18:06.119498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.119512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 [2024-06-10 12:18:06.119668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.119680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 [2024-06-10 12:18:06.119895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.119907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 [2024-06-10 12:18:06.120196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.120208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 [2024-06-10 12:18:06.120434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.120446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 [2024-06-10 12:18:06.120610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.120623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 [2024-06-10 12:18:06.120766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.120778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 [2024-06-10 12:18:06.121021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.121033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 [2024-06-10 12:18:06.121264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.121276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 [2024-06-10 12:18:06.121488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.121500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 [2024-06-10 12:18:06.121658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.121670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 [2024-06-10 12:18:06.121850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.121862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 [2024-06-10 12:18:06.122078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.122090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 [2024-06-10 12:18:06.122197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.122209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 [2024-06-10 12:18:06.122448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.122460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 [2024-06-10 12:18:06.122717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.122729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 [2024-06-10 12:18:06.122896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.122907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 [2024-06-10 12:18:06.123009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.123021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 [2024-06-10 12:18:06.123207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.123219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 [2024-06-10 12:18:06.123394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.123406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 [2024-06-10 12:18:06.123574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.123587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 [2024-06-10 12:18:06.123752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.123764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 [2024-06-10 12:18:06.123875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.123887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 [2024-06-10 12:18:06.123991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.124003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 [2024-06-10 12:18:06.124191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.124203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.736 qpair failed and we were unable to recover it.
00:29:16.736 [2024-06-10 12:18:06.124350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.736 [2024-06-10 12:18:06.124362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.737 qpair failed and we were unable to recover it.
00:29:16.737 [2024-06-10 12:18:06.124563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.737 [2024-06-10 12:18:06.124576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.737 qpair failed and we were unable to recover it.
00:29:16.737 [2024-06-10 12:18:06.124755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.737 [2024-06-10 12:18:06.124766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.737 qpair failed and we were unable to recover it.
00:29:16.737 [2024-06-10 12:18:06.124988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.737 [2024-06-10 12:18:06.125000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.737 qpair failed and we were unable to recover it.
00:29:16.737 [2024-06-10 12:18:06.125088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.737 [2024-06-10 12:18:06.125099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.737 qpair failed and we were unable to recover it.
00:29:16.737 [2024-06-10 12:18:06.125279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.737 [2024-06-10 12:18:06.125291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.737 qpair failed and we were unable to recover it.
00:29:16.737 [2024-06-10 12:18:06.125528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.737 [2024-06-10 12:18:06.125541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.737 qpair failed and we were unable to recover it.
00:29:16.737 [2024-06-10 12:18:06.125631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.737 [2024-06-10 12:18:06.125643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.737 qpair failed and we were unable to recover it.
00:29:16.737 [2024-06-10 12:18:06.125808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.737 [2024-06-10 12:18:06.125820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.737 qpair failed and we were unable to recover it.
00:29:16.737 [2024-06-10 12:18:06.126042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.737 [2024-06-10 12:18:06.126054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.737 qpair failed and we were unable to recover it.
00:29:16.737 [2024-06-10 12:18:06.126293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.737 [2024-06-10 12:18:06.126305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.737 qpair failed and we were unable to recover it.
00:29:16.737 12:18:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:16.737 [2024-06-10 12:18:06.126475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.737 [2024-06-10 12:18:06.126492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.737 qpair failed and we were unable to recover it.
00:29:16.737 [2024-06-10 12:18:06.126707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.737 [2024-06-10 12:18:06.126720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.737 qpair failed and we were unable to recover it.
00:29:16.737 12:18:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:16.737 [2024-06-10 12:18:06.126869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.737 [2024-06-10 12:18:06.126883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.737 qpair failed and we were unable to recover it.
00:29:16.737 [2024-06-10 12:18:06.127051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.737 [2024-06-10 12:18:06.127064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.737 qpair failed and we were unable to recover it.
00:29:16.737 12:18:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:16.737 [2024-06-10 12:18:06.127300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.737 [2024-06-10 12:18:06.127312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.737 qpair failed and we were unable to recover it.
00:29:16.737 12:18:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:16.737 [2024-06-10 12:18:06.127441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.737 [2024-06-10 12:18:06.127453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.737 qpair failed and we were unable to recover it.
00:29:16.737 [2024-06-10 12:18:06.127690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.737 [2024-06-10 12:18:06.127703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.737 qpair failed and we were unable to recover it.
00:29:16.737 [2024-06-10 12:18:06.127862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.737 [2024-06-10 12:18:06.127874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.737 qpair failed and we were unable to recover it.
00:29:16.737 [2024-06-10 12:18:06.127988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.737 [2024-06-10 12:18:06.128000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.737 qpair failed and we were unable to recover it.
00:29:16.737 [2024-06-10 12:18:06.128155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.737 [2024-06-10 12:18:06.128168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.737 qpair failed and we were unable to recover it.
00:29:16.737 [2024-06-10 12:18:06.128407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.737 [2024-06-10 12:18:06.128419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.737 qpair failed and we were unable to recover it.
00:29:16.737 [2024-06-10 12:18:06.128625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.737 [2024-06-10 12:18:06.128638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.737 qpair failed and we were unable to recover it.
00:29:16.737 [2024-06-10 12:18:06.128754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.737 [2024-06-10 12:18:06.128766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.737 qpair failed and we were unable to recover it.
00:29:16.737 [2024-06-10 12:18:06.128858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.737 [2024-06-10 12:18:06.128870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.737 qpair failed and we were unable to recover it.
00:29:16.737 [2024-06-10 12:18:06.129030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.737 [2024-06-10 12:18:06.129042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.737 qpair failed and we were unable to recover it.
00:29:16.737 [2024-06-10 12:18:06.129193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.737 [2024-06-10 12:18:06.129204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.737 qpair failed and we were unable to recover it.
00:29:16.737 [2024-06-10 12:18:06.129382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.737 [2024-06-10 12:18:06.129394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.737 qpair failed and we were unable to recover it.
00:29:16.737 [2024-06-10 12:18:06.129541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.737 [2024-06-10 12:18:06.129553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5044000b90 with addr=10.0.0.2, port=4420
00:29:16.737 qpair failed and we were unable to recover it.
00:29:16.737 [2024-06-10 12:18:06.129742] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:16.737 [2024-06-10 12:18:06.132076] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.737 [2024-06-10 12:18:06.132225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.737 [2024-06-10 12:18:06.132245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.737 [2024-06-10 12:18:06.132256] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.737 [2024-06-10 12:18:06.132265] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90
00:29:16.737 [2024-06-10 12:18:06.132288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.737 qpair failed and we were unable to recover it.
00:29:16.737 12:18:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:16.737 12:18:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:16.737 12:18:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:16.737 12:18:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:16.737 [2024-06-10 12:18:06.142030] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.737 [2024-06-10 12:18:06.142102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.737 [2024-06-10 12:18:06.142121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.737 [2024-06-10 12:18:06.142131] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.737 [2024-06-10 12:18:06.142140] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90
00:29:16.737 [2024-06-10 12:18:06.142159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.737 qpair failed and we were unable to recover it.
00:29:16.737 12:18:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:16.737 12:18:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2384328
00:29:16.738 [2024-06-10 12:18:06.152057] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.738 [2024-06-10 12:18:06.152115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.738 [2024-06-10 12:18:06.152133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.738 [2024-06-10 12:18:06.152147] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.738 [2024-06-10 12:18:06.152155] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90
00:29:16.738 [2024-06-10 12:18:06.152173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.738 qpair failed and we were unable to recover it.
00:29:16.738 [2024-06-10 12:18:06.161996] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.738 [2024-06-10 12:18:06.162061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.738 [2024-06-10 12:18:06.162078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.738 [2024-06-10 12:18:06.162088] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.738 [2024-06-10 12:18:06.162097] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90
00:29:16.738 [2024-06-10 12:18:06.162116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.738 qpair failed and we were unable to recover it.
00:29:16.738 [2024-06-10 12:18:06.172072] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.738 [2024-06-10 12:18:06.172137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.738 [2024-06-10 12:18:06.172155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.738 [2024-06-10 12:18:06.172164] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.738 [2024-06-10 12:18:06.172173] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90
00:29:16.738 [2024-06-10 12:18:06.172192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.738 qpair failed and we were unable to recover it.
00:29:16.738 [2024-06-10 12:18:06.182066] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.738 [2024-06-10 12:18:06.182125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.738 [2024-06-10 12:18:06.182143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.738 [2024-06-10 12:18:06.182152] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.738 [2024-06-10 12:18:06.182161] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90
00:29:16.738 [2024-06-10 12:18:06.182179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.738 qpair failed and we were unable to recover it.
00:29:16.998 [2024-06-10 12:18:06.192101] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.998 [2024-06-10 12:18:06.192157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.998 [2024-06-10 12:18:06.192175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.998 [2024-06-10 12:18:06.192184] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.998 [2024-06-10 12:18:06.192193] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90
00:29:16.998 [2024-06-10 12:18:06.192211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.998 qpair failed and we were unable to recover it.
00:29:16.998 [2024-06-10 12:18:06.202063] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.998 [2024-06-10 12:18:06.202149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.998 [2024-06-10 12:18:06.202167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.998 [2024-06-10 12:18:06.202177] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.998 [2024-06-10 12:18:06.202185] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:16.998 [2024-06-10 12:18:06.202203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.998 qpair failed and we were unable to recover it. 
00:29:16.998 [2024-06-10 12:18:06.212119] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.998 [2024-06-10 12:18:06.212207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.998 [2024-06-10 12:18:06.212225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.998 [2024-06-10 12:18:06.212234] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.998 [2024-06-10 12:18:06.212242] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:16.998 [2024-06-10 12:18:06.212260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.998 qpair failed and we were unable to recover it. 
00:29:16.998 [2024-06-10 12:18:06.222160] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.998 [2024-06-10 12:18:06.222219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.998 [2024-06-10 12:18:06.222237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.998 [2024-06-10 12:18:06.222246] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.998 [2024-06-10 12:18:06.222255] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:16.998 [2024-06-10 12:18:06.222273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.998 qpair failed and we were unable to recover it. 
00:29:16.998 [2024-06-10 12:18:06.232181] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.998 [2024-06-10 12:18:06.232234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.998 [2024-06-10 12:18:06.232251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.998 [2024-06-10 12:18:06.232261] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.998 [2024-06-10 12:18:06.232269] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:16.998 [2024-06-10 12:18:06.232288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.998 qpair failed and we were unable to recover it. 
00:29:16.998 [2024-06-10 12:18:06.242224] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.998 [2024-06-10 12:18:06.242310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.998 [2024-06-10 12:18:06.242331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.998 [2024-06-10 12:18:06.242340] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.998 [2024-06-10 12:18:06.242349] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:16.998 [2024-06-10 12:18:06.242367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.998 qpair failed and we were unable to recover it. 
00:29:16.998 [2024-06-10 12:18:06.252327] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.998 [2024-06-10 12:18:06.252428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.998 [2024-06-10 12:18:06.252446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.998 [2024-06-10 12:18:06.252456] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.998 [2024-06-10 12:18:06.252464] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:16.998 [2024-06-10 12:18:06.252486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.998 qpair failed and we were unable to recover it. 
00:29:16.998 [2024-06-10 12:18:06.262288] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.998 [2024-06-10 12:18:06.262347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.998 [2024-06-10 12:18:06.262364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.998 [2024-06-10 12:18:06.262373] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.998 [2024-06-10 12:18:06.262382] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:16.998 [2024-06-10 12:18:06.262399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.998 qpair failed and we were unable to recover it. 
00:29:16.998 [2024-06-10 12:18:06.272308] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.998 [2024-06-10 12:18:06.272362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.998 [2024-06-10 12:18:06.272380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.998 [2024-06-10 12:18:06.272389] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.998 [2024-06-10 12:18:06.272397] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:16.998 [2024-06-10 12:18:06.272415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.998 qpair failed and we were unable to recover it. 
00:29:16.998 [2024-06-10 12:18:06.282315] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.998 [2024-06-10 12:18:06.282377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.999 [2024-06-10 12:18:06.282393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.999 [2024-06-10 12:18:06.282403] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.999 [2024-06-10 12:18:06.282411] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:16.999 [2024-06-10 12:18:06.282435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.999 qpair failed and we were unable to recover it. 
00:29:16.999 [2024-06-10 12:18:06.292357] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.999 [2024-06-10 12:18:06.292414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.999 [2024-06-10 12:18:06.292431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.999 [2024-06-10 12:18:06.292440] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.999 [2024-06-10 12:18:06.292448] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:16.999 [2024-06-10 12:18:06.292466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.999 qpair failed and we were unable to recover it. 
00:29:16.999 [2024-06-10 12:18:06.302395] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.999 [2024-06-10 12:18:06.302455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.999 [2024-06-10 12:18:06.302472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.999 [2024-06-10 12:18:06.302484] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.999 [2024-06-10 12:18:06.302493] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:16.999 [2024-06-10 12:18:06.302511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.999 qpair failed and we were unable to recover it. 
00:29:16.999 [2024-06-10 12:18:06.312491] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.999 [2024-06-10 12:18:06.312547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.999 [2024-06-10 12:18:06.312563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.999 [2024-06-10 12:18:06.312572] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.999 [2024-06-10 12:18:06.312581] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:16.999 [2024-06-10 12:18:06.312599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.999 qpair failed and we were unable to recover it. 
00:29:16.999 [2024-06-10 12:18:06.322425] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.999 [2024-06-10 12:18:06.322492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.999 [2024-06-10 12:18:06.322509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.999 [2024-06-10 12:18:06.322518] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.999 [2024-06-10 12:18:06.322530] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:16.999 [2024-06-10 12:18:06.322548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.999 qpair failed and we were unable to recover it. 
00:29:16.999 [2024-06-10 12:18:06.332460] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.999 [2024-06-10 12:18:06.332523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.999 [2024-06-10 12:18:06.332542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.999 [2024-06-10 12:18:06.332552] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.999 [2024-06-10 12:18:06.332560] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:16.999 [2024-06-10 12:18:06.332580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.999 qpair failed and we were unable to recover it. 
00:29:16.999 [2024-06-10 12:18:06.342501] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.999 [2024-06-10 12:18:06.342560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.999 [2024-06-10 12:18:06.342577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.999 [2024-06-10 12:18:06.342586] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.999 [2024-06-10 12:18:06.342594] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:16.999 [2024-06-10 12:18:06.342612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.999 qpair failed and we were unable to recover it. 
00:29:16.999 [2024-06-10 12:18:06.352555] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.999 [2024-06-10 12:18:06.352608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.999 [2024-06-10 12:18:06.352625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.999 [2024-06-10 12:18:06.352634] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.999 [2024-06-10 12:18:06.352642] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:16.999 [2024-06-10 12:18:06.352660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.999 qpair failed and we were unable to recover it. 
00:29:16.999 [2024-06-10 12:18:06.362576] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.999 [2024-06-10 12:18:06.362636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.999 [2024-06-10 12:18:06.362652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.999 [2024-06-10 12:18:06.362661] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.999 [2024-06-10 12:18:06.362670] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:16.999 [2024-06-10 12:18:06.362688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.999 qpair failed and we were unable to recover it. 
00:29:16.999 [2024-06-10 12:18:06.372590] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.999 [2024-06-10 12:18:06.372651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.999 [2024-06-10 12:18:06.372668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.999 [2024-06-10 12:18:06.372677] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.999 [2024-06-10 12:18:06.372688] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:16.999 [2024-06-10 12:18:06.372706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.999 qpair failed and we were unable to recover it. 
00:29:16.999 [2024-06-10 12:18:06.382822] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.999 [2024-06-10 12:18:06.382895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.999 [2024-06-10 12:18:06.382912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.999 [2024-06-10 12:18:06.382921] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.999 [2024-06-10 12:18:06.382930] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:16.999 [2024-06-10 12:18:06.382951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.999 qpair failed and we were unable to recover it. 
00:29:16.999 [2024-06-10 12:18:06.392725] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.999 [2024-06-10 12:18:06.392786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.999 [2024-06-10 12:18:06.392803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.999 [2024-06-10 12:18:06.392812] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.999 [2024-06-10 12:18:06.392821] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:16.999 [2024-06-10 12:18:06.392838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.999 qpair failed and we were unable to recover it. 
00:29:16.999 [2024-06-10 12:18:06.402715] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.999 [2024-06-10 12:18:06.402776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.999 [2024-06-10 12:18:06.402792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.999 [2024-06-10 12:18:06.402801] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.999 [2024-06-10 12:18:06.402810] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:16.999 [2024-06-10 12:18:06.402827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.999 qpair failed and we were unable to recover it. 
00:29:16.999 [2024-06-10 12:18:06.412760] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.999 [2024-06-10 12:18:06.412820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.999 [2024-06-10 12:18:06.412837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.999 [2024-06-10 12:18:06.412846] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.999 [2024-06-10 12:18:06.412855] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:16.999 [2024-06-10 12:18:06.412873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.999 qpair failed and we were unable to recover it. 
00:29:17.000 [2024-06-10 12:18:06.422719] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.000 [2024-06-10 12:18:06.422780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.000 [2024-06-10 12:18:06.422797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.000 [2024-06-10 12:18:06.422806] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.000 [2024-06-10 12:18:06.422814] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.000 [2024-06-10 12:18:06.422832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.000 qpair failed and we were unable to recover it. 
00:29:17.000 [2024-06-10 12:18:06.432796] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.000 [2024-06-10 12:18:06.432857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.000 [2024-06-10 12:18:06.432874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.000 [2024-06-10 12:18:06.432884] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.000 [2024-06-10 12:18:06.432893] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.000 [2024-06-10 12:18:06.432910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.000 qpair failed and we were unable to recover it. 
00:29:17.000 [2024-06-10 12:18:06.442764] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.000 [2024-06-10 12:18:06.442824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.000 [2024-06-10 12:18:06.442841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.000 [2024-06-10 12:18:06.442849] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.000 [2024-06-10 12:18:06.442858] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.000 [2024-06-10 12:18:06.442875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.000 qpair failed and we were unable to recover it. 
00:29:17.000 [2024-06-10 12:18:06.452789] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.000 [2024-06-10 12:18:06.452851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.000 [2024-06-10 12:18:06.452868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.000 [2024-06-10 12:18:06.452877] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.000 [2024-06-10 12:18:06.452886] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.000 [2024-06-10 12:18:06.452903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.000 qpair failed and we were unable to recover it. 
00:29:17.000 [2024-06-10 12:18:06.462828] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.000 [2024-06-10 12:18:06.462900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.000 [2024-06-10 12:18:06.462917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.000 [2024-06-10 12:18:06.462930] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.000 [2024-06-10 12:18:06.462939] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.000 [2024-06-10 12:18:06.462957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.000 qpair failed and we were unable to recover it. 
00:29:17.000 [2024-06-10 12:18:06.472899] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.000 [2024-06-10 12:18:06.472955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.000 [2024-06-10 12:18:06.472972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.000 [2024-06-10 12:18:06.472981] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.000 [2024-06-10 12:18:06.472990] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.000 [2024-06-10 12:18:06.473009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.000 qpair failed and we were unable to recover it. 
00:29:17.000 [2024-06-10 12:18:06.482844] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.000 [2024-06-10 12:18:06.482909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.000 [2024-06-10 12:18:06.482926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.000 [2024-06-10 12:18:06.482935] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.000 [2024-06-10 12:18:06.482944] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.000 [2024-06-10 12:18:06.482963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.000 qpair failed and we were unable to recover it. 
00:29:17.000 [2024-06-10 12:18:06.492908] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.000 [2024-06-10 12:18:06.492972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.000 [2024-06-10 12:18:06.492988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.000 [2024-06-10 12:18:06.492997] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.000 [2024-06-10 12:18:06.493006] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.000 [2024-06-10 12:18:06.493024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.000 qpair failed and we were unable to recover it. 
00:29:17.000 [2024-06-10 12:18:06.502925] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.000 [2024-06-10 12:18:06.502982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.000 [2024-06-10 12:18:06.502998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.000 [2024-06-10 12:18:06.503008] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.000 [2024-06-10 12:18:06.503018] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.000 [2024-06-10 12:18:06.503035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.000 qpair failed and we were unable to recover it. 
00:29:17.000 [2024-06-10 12:18:06.512927] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.000 [2024-06-10 12:18:06.512987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.000 [2024-06-10 12:18:06.513004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.000 [2024-06-10 12:18:06.513013] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.000 [2024-06-10 12:18:06.513022] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.000 [2024-06-10 12:18:06.513040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.000 qpair failed and we were unable to recover it. 
00:29:17.260 [2024-06-10 12:18:06.522992] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.260 [2024-06-10 12:18:06.523059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.260 [2024-06-10 12:18:06.523075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.260 [2024-06-10 12:18:06.523085] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.260 [2024-06-10 12:18:06.523094] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.260 [2024-06-10 12:18:06.523112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.260 qpair failed and we were unable to recover it. 
00:29:17.260 [2024-06-10 12:18:06.533012] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.260 [2024-06-10 12:18:06.533106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.260 [2024-06-10 12:18:06.533124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.260 [2024-06-10 12:18:06.533134] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.260 [2024-06-10 12:18:06.533143] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.260 [2024-06-10 12:18:06.533161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.260 qpair failed and we were unable to recover it. 
00:29:17.260 [2024-06-10 12:18:06.543040] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.260 [2024-06-10 12:18:06.543098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.260 [2024-06-10 12:18:06.543115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.260 [2024-06-10 12:18:06.543124] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.260 [2024-06-10 12:18:06.543133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.260 [2024-06-10 12:18:06.543150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.260 qpair failed and we were unable to recover it. 
00:29:17.260 [2024-06-10 12:18:06.553069] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.260 [2024-06-10 12:18:06.553124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.260 [2024-06-10 12:18:06.553141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.260 [2024-06-10 12:18:06.553153] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.260 [2024-06-10 12:18:06.553162] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.260 [2024-06-10 12:18:06.553179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.260 qpair failed and we were unable to recover it. 
00:29:17.260 [2024-06-10 12:18:06.563100] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.260 [2024-06-10 12:18:06.563160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.260 [2024-06-10 12:18:06.563176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.260 [2024-06-10 12:18:06.563185] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.260 [2024-06-10 12:18:06.563194] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.260 [2024-06-10 12:18:06.563211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.260 qpair failed and we were unable to recover it. 
00:29:17.260 [2024-06-10 12:18:06.573129] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.260 [2024-06-10 12:18:06.573189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.260 [2024-06-10 12:18:06.573205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.260 [2024-06-10 12:18:06.573215] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.260 [2024-06-10 12:18:06.573224] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.260 [2024-06-10 12:18:06.573241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.261 qpair failed and we were unable to recover it. 
00:29:17.261 [2024-06-10 12:18:06.583216] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.261 [2024-06-10 12:18:06.583273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.261 [2024-06-10 12:18:06.583290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.261 [2024-06-10 12:18:06.583299] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.261 [2024-06-10 12:18:06.583307] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.261 [2024-06-10 12:18:06.583325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.261 qpair failed and we were unable to recover it. 
00:29:17.261 [2024-06-10 12:18:06.593228] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.261 [2024-06-10 12:18:06.593292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.261 [2024-06-10 12:18:06.593309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.261 [2024-06-10 12:18:06.593318] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.261 [2024-06-10 12:18:06.593331] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.261 [2024-06-10 12:18:06.593349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.261 qpair failed and we were unable to recover it. 
00:29:17.261 [2024-06-10 12:18:06.603214] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.261 [2024-06-10 12:18:06.603275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.261 [2024-06-10 12:18:06.603291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.261 [2024-06-10 12:18:06.603301] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.261 [2024-06-10 12:18:06.603309] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.261 [2024-06-10 12:18:06.603327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.261 qpair failed and we were unable to recover it. 
00:29:17.261 [2024-06-10 12:18:06.613242] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.261 [2024-06-10 12:18:06.613302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.261 [2024-06-10 12:18:06.613319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.261 [2024-06-10 12:18:06.613329] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.261 [2024-06-10 12:18:06.613337] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.261 [2024-06-10 12:18:06.613355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.261 qpair failed and we were unable to recover it. 
00:29:17.261 [2024-06-10 12:18:06.623266] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.261 [2024-06-10 12:18:06.623323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.261 [2024-06-10 12:18:06.623339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.261 [2024-06-10 12:18:06.623348] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.261 [2024-06-10 12:18:06.623357] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.261 [2024-06-10 12:18:06.623374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.261 qpair failed and we were unable to recover it. 
00:29:17.261 [2024-06-10 12:18:06.633306] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.261 [2024-06-10 12:18:06.633359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.261 [2024-06-10 12:18:06.633376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.261 [2024-06-10 12:18:06.633385] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.261 [2024-06-10 12:18:06.633393] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.261 [2024-06-10 12:18:06.633411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.261 qpair failed and we were unable to recover it. 
00:29:17.261 [2024-06-10 12:18:06.643329] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.261 [2024-06-10 12:18:06.643390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.261 [2024-06-10 12:18:06.643410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.261 [2024-06-10 12:18:06.643419] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.261 [2024-06-10 12:18:06.643428] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.261 [2024-06-10 12:18:06.643445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.261 qpair failed and we were unable to recover it. 
00:29:17.261 [2024-06-10 12:18:06.653359] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.261 [2024-06-10 12:18:06.653421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.261 [2024-06-10 12:18:06.653437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.261 [2024-06-10 12:18:06.653446] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.261 [2024-06-10 12:18:06.653455] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.261 [2024-06-10 12:18:06.653472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.261 qpair failed and we were unable to recover it. 
00:29:17.261 [2024-06-10 12:18:06.663402] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.261 [2024-06-10 12:18:06.663456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.261 [2024-06-10 12:18:06.663472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.261 [2024-06-10 12:18:06.663485] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.261 [2024-06-10 12:18:06.663494] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.261 [2024-06-10 12:18:06.663512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.261 qpair failed and we were unable to recover it. 
00:29:17.261 [2024-06-10 12:18:06.673425] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.261 [2024-06-10 12:18:06.673480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.261 [2024-06-10 12:18:06.673496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.261 [2024-06-10 12:18:06.673506] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.261 [2024-06-10 12:18:06.673514] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.261 [2024-06-10 12:18:06.673532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.261 qpair failed and we were unable to recover it. 
00:29:17.261 [2024-06-10 12:18:06.683451] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.261 [2024-06-10 12:18:06.683516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.261 [2024-06-10 12:18:06.683534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.261 [2024-06-10 12:18:06.683543] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.261 [2024-06-10 12:18:06.683555] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.261 [2024-06-10 12:18:06.683576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.261 qpair failed and we were unable to recover it. 
00:29:17.261 [2024-06-10 12:18:06.693473] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.261 [2024-06-10 12:18:06.693536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.261 [2024-06-10 12:18:06.693552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.261 [2024-06-10 12:18:06.693562] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.261 [2024-06-10 12:18:06.693571] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.261 [2024-06-10 12:18:06.693588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.261 qpair failed and we were unable to recover it. 
00:29:17.261 [2024-06-10 12:18:06.703508] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.261 [2024-06-10 12:18:06.703561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.261 [2024-06-10 12:18:06.703577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.261 [2024-06-10 12:18:06.703586] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.261 [2024-06-10 12:18:06.703595] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.261 [2024-06-10 12:18:06.703612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.261 qpair failed and we were unable to recover it. 
00:29:17.261 [2024-06-10 12:18:06.713568] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.261 [2024-06-10 12:18:06.713623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.261 [2024-06-10 12:18:06.713639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.262 [2024-06-10 12:18:06.713648] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.262 [2024-06-10 12:18:06.713657] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.262 [2024-06-10 12:18:06.713675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.262 qpair failed and we were unable to recover it. 
00:29:17.262 [2024-06-10 12:18:06.723588] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.262 [2024-06-10 12:18:06.723648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.262 [2024-06-10 12:18:06.723664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.262 [2024-06-10 12:18:06.723673] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.262 [2024-06-10 12:18:06.723682] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.262 [2024-06-10 12:18:06.723700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.262 qpair failed and we were unable to recover it. 
00:29:17.262 [2024-06-10 12:18:06.733597] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.262 [2024-06-10 12:18:06.733661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.262 [2024-06-10 12:18:06.733680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.262 [2024-06-10 12:18:06.733689] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.262 [2024-06-10 12:18:06.733701] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.262 [2024-06-10 12:18:06.733719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.262 qpair failed and we were unable to recover it. 
00:29:17.262 [2024-06-10 12:18:06.743630] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.262 [2024-06-10 12:18:06.743687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.262 [2024-06-10 12:18:06.743703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.262 [2024-06-10 12:18:06.743712] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.262 [2024-06-10 12:18:06.743721] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.262 [2024-06-10 12:18:06.743738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.262 qpair failed and we were unable to recover it. 
00:29:17.262 [2024-06-10 12:18:06.753661] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.262 [2024-06-10 12:18:06.753719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.262 [2024-06-10 12:18:06.753735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.262 [2024-06-10 12:18:06.753744] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.262 [2024-06-10 12:18:06.753753] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.262 [2024-06-10 12:18:06.753771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.262 qpair failed and we were unable to recover it. 
00:29:17.262 [2024-06-10 12:18:06.763690] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.262 [2024-06-10 12:18:06.763790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.262 [2024-06-10 12:18:06.763807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.262 [2024-06-10 12:18:06.763817] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.262 [2024-06-10 12:18:06.763825] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.262 [2024-06-10 12:18:06.763843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.262 qpair failed and we were unable to recover it. 
00:29:17.262 [2024-06-10 12:18:06.773712] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.262 [2024-06-10 12:18:06.773773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.262 [2024-06-10 12:18:06.773790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.262 [2024-06-10 12:18:06.773799] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.262 [2024-06-10 12:18:06.773811] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.262 [2024-06-10 12:18:06.773831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.262 qpair failed and we were unable to recover it. 
00:29:17.522 [2024-06-10 12:18:06.783742] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.522 [2024-06-10 12:18:06.783799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.522 [2024-06-10 12:18:06.783815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.522 [2024-06-10 12:18:06.783825] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.522 [2024-06-10 12:18:06.783833] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.522 [2024-06-10 12:18:06.783851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.522 qpair failed and we were unable to recover it. 
00:29:17.522 [2024-06-10 12:18:06.793773] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.522 [2024-06-10 12:18:06.793830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.522 [2024-06-10 12:18:06.793847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.522 [2024-06-10 12:18:06.793856] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.522 [2024-06-10 12:18:06.793864] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.522 [2024-06-10 12:18:06.793882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.522 qpair failed and we were unable to recover it. 
00:29:17.522 [2024-06-10 12:18:06.803816] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.522 [2024-06-10 12:18:06.803876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.522 [2024-06-10 12:18:06.803892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.522 [2024-06-10 12:18:06.803901] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.522 [2024-06-10 12:18:06.803910] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.522 [2024-06-10 12:18:06.803928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.522 qpair failed and we were unable to recover it. 
00:29:17.522 [2024-06-10 12:18:06.813835] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.522 [2024-06-10 12:18:06.813893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.522 [2024-06-10 12:18:06.813909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.522 [2024-06-10 12:18:06.813918] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.522 [2024-06-10 12:18:06.813927] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.522 [2024-06-10 12:18:06.813945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.522 qpair failed and we were unable to recover it. 
00:29:17.522 [2024-06-10 12:18:06.823858] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.522 [2024-06-10 12:18:06.823915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.522 [2024-06-10 12:18:06.823931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.522 [2024-06-10 12:18:06.823943] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.522 [2024-06-10 12:18:06.823952] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.522 [2024-06-10 12:18:06.823969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.522 qpair failed and we were unable to recover it. 
00:29:17.522 [2024-06-10 12:18:06.833887] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.522 [2024-06-10 12:18:06.833942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.522 [2024-06-10 12:18:06.833959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.522 [2024-06-10 12:18:06.833968] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.522 [2024-06-10 12:18:06.833976] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.522 [2024-06-10 12:18:06.833994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.522 qpair failed and we were unable to recover it. 
00:29:17.522 [2024-06-10 12:18:06.843917] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.522 [2024-06-10 12:18:06.843973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.522 [2024-06-10 12:18:06.843989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.522 [2024-06-10 12:18:06.843998] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.522 [2024-06-10 12:18:06.844007] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.522 [2024-06-10 12:18:06.844025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.522 qpair failed and we were unable to recover it. 
00:29:17.522 [2024-06-10 12:18:06.853942] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.522 [2024-06-10 12:18:06.854003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.522 [2024-06-10 12:18:06.854019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.522 [2024-06-10 12:18:06.854028] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.522 [2024-06-10 12:18:06.854037] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.522 [2024-06-10 12:18:06.854054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.522 qpair failed and we were unable to recover it. 
00:29:17.522 [2024-06-10 12:18:06.863980] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.522 [2024-06-10 12:18:06.864036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.522 [2024-06-10 12:18:06.864052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.522 [2024-06-10 12:18:06.864062] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.522 [2024-06-10 12:18:06.864073] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.522 [2024-06-10 12:18:06.864091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.522 qpair failed and we were unable to recover it. 
00:29:17.522 [2024-06-10 12:18:06.874019] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.522 [2024-06-10 12:18:06.874073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.522 [2024-06-10 12:18:06.874090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.522 [2024-06-10 12:18:06.874099] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.522 [2024-06-10 12:18:06.874108] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.522 [2024-06-10 12:18:06.874126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.522 qpair failed and we were unable to recover it. 
00:29:17.523 [2024-06-10 12:18:06.884096] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.523 [2024-06-10 12:18:06.884154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.523 [2024-06-10 12:18:06.884171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.523 [2024-06-10 12:18:06.884180] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.523 [2024-06-10 12:18:06.884188] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.523 [2024-06-10 12:18:06.884206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.523 qpair failed and we were unable to recover it. 
00:29:17.523 [2024-06-10 12:18:06.894001] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.523 [2024-06-10 12:18:06.894064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.523 [2024-06-10 12:18:06.894081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.523 [2024-06-10 12:18:06.894090] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.523 [2024-06-10 12:18:06.894099] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.523 [2024-06-10 12:18:06.894115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.523 qpair failed and we were unable to recover it. 
00:29:17.523 [2024-06-10 12:18:06.904122] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.523 [2024-06-10 12:18:06.904180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.523 [2024-06-10 12:18:06.904196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.523 [2024-06-10 12:18:06.904205] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.523 [2024-06-10 12:18:06.904214] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.523 [2024-06-10 12:18:06.904231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.523 qpair failed and we were unable to recover it. 
00:29:17.523 [2024-06-10 12:18:06.914113] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.523 [2024-06-10 12:18:06.914165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.523 [2024-06-10 12:18:06.914183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.523 [2024-06-10 12:18:06.914192] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.523 [2024-06-10 12:18:06.914201] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.523 [2024-06-10 12:18:06.914219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.523 qpair failed and we were unable to recover it. 
00:29:17.523 [2024-06-10 12:18:06.924152] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.523 [2024-06-10 12:18:06.924212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.523 [2024-06-10 12:18:06.924228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.523 [2024-06-10 12:18:06.924238] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.523 [2024-06-10 12:18:06.924247] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.523 [2024-06-10 12:18:06.924264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.523 qpair failed and we were unable to recover it. 
00:29:17.523 [2024-06-10 12:18:06.934206] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.523 [2024-06-10 12:18:06.934269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.523 [2024-06-10 12:18:06.934286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.523 [2024-06-10 12:18:06.934295] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.523 [2024-06-10 12:18:06.934304] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.523 [2024-06-10 12:18:06.934321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.523 qpair failed and we were unable to recover it. 
00:29:17.523 [2024-06-10 12:18:06.944205] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.523 [2024-06-10 12:18:06.944262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.523 [2024-06-10 12:18:06.944278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.523 [2024-06-10 12:18:06.944287] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.523 [2024-06-10 12:18:06.944296] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.523 [2024-06-10 12:18:06.944314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.523 qpair failed and we were unable to recover it. 
00:29:17.523 [2024-06-10 12:18:06.954252] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.523 [2024-06-10 12:18:06.954307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.523 [2024-06-10 12:18:06.954324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.523 [2024-06-10 12:18:06.954336] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.523 [2024-06-10 12:18:06.954344] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.523 [2024-06-10 12:18:06.954361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.523 qpair failed and we were unable to recover it. 
00:29:17.523 [2024-06-10 12:18:06.964253] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.523 [2024-06-10 12:18:06.964331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.523 [2024-06-10 12:18:06.964349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.523 [2024-06-10 12:18:06.964358] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.523 [2024-06-10 12:18:06.964366] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.523 [2024-06-10 12:18:06.964384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.523 qpair failed and we were unable to recover it. 
00:29:17.523 [2024-06-10 12:18:06.974346] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.523 [2024-06-10 12:18:06.974399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.523 [2024-06-10 12:18:06.974416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.523 [2024-06-10 12:18:06.974425] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.523 [2024-06-10 12:18:06.974434] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.523 [2024-06-10 12:18:06.974452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.523 qpair failed and we were unable to recover it. 
00:29:17.523 [2024-06-10 12:18:06.984302] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.523 [2024-06-10 12:18:06.984360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.523 [2024-06-10 12:18:06.984377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.523 [2024-06-10 12:18:06.984386] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.523 [2024-06-10 12:18:06.984395] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.523 [2024-06-10 12:18:06.984413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.523 qpair failed and we were unable to recover it. 
00:29:17.523 [2024-06-10 12:18:06.994336] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.523 [2024-06-10 12:18:06.994406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.523 [2024-06-10 12:18:06.994424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.523 [2024-06-10 12:18:06.994433] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.523 [2024-06-10 12:18:06.994441] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.523 [2024-06-10 12:18:06.994459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.523 qpair failed and we were unable to recover it. 
00:29:17.523 [2024-06-10 12:18:07.004362] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.523 [2024-06-10 12:18:07.004449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.523 [2024-06-10 12:18:07.004466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.523 [2024-06-10 12:18:07.004479] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.523 [2024-06-10 12:18:07.004489] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.523 [2024-06-10 12:18:07.004507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.523 qpair failed and we were unable to recover it. 
00:29:17.523 [2024-06-10 12:18:07.014385] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.523 [2024-06-10 12:18:07.014443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.524 [2024-06-10 12:18:07.014460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.524 [2024-06-10 12:18:07.014469] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.524 [2024-06-10 12:18:07.014482] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.524 [2024-06-10 12:18:07.014501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.524 qpair failed and we were unable to recover it. 
00:29:17.524 [2024-06-10 12:18:07.024423] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.524 [2024-06-10 12:18:07.024517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.524 [2024-06-10 12:18:07.024535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.524 [2024-06-10 12:18:07.024544] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.524 [2024-06-10 12:18:07.024552] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.524 [2024-06-10 12:18:07.024569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.524 qpair failed and we were unable to recover it. 
00:29:17.524 [2024-06-10 12:18:07.034445] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.524 [2024-06-10 12:18:07.034510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.524 [2024-06-10 12:18:07.034527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.524 [2024-06-10 12:18:07.034536] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.524 [2024-06-10 12:18:07.034549] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.524 [2024-06-10 12:18:07.034567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.524 qpair failed and we were unable to recover it. 
00:29:17.784 [2024-06-10 12:18:07.044413] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.784 [2024-06-10 12:18:07.044474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.784 [2024-06-10 12:18:07.044497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.784 [2024-06-10 12:18:07.044507] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.784 [2024-06-10 12:18:07.044515] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.784 [2024-06-10 12:18:07.044533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.784 qpair failed and we were unable to recover it. 
00:29:17.784 [2024-06-10 12:18:07.054503] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.784 [2024-06-10 12:18:07.054602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.784 [2024-06-10 12:18:07.054620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.784 [2024-06-10 12:18:07.054629] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.784 [2024-06-10 12:18:07.054638] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.784 [2024-06-10 12:18:07.054655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.784 qpair failed and we were unable to recover it. 
00:29:17.784 [2024-06-10 12:18:07.064539] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.784 [2024-06-10 12:18:07.064596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.784 [2024-06-10 12:18:07.064613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.784 [2024-06-10 12:18:07.064622] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.784 [2024-06-10 12:18:07.064631] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.784 [2024-06-10 12:18:07.064648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.784 qpair failed and we were unable to recover it. 
00:29:17.784 [2024-06-10 12:18:07.074484] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.784 [2024-06-10 12:18:07.074540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.784 [2024-06-10 12:18:07.074557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.784 [2024-06-10 12:18:07.074566] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.784 [2024-06-10 12:18:07.074574] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.784 [2024-06-10 12:18:07.074593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.784 qpair failed and we were unable to recover it. 
00:29:17.784 [2024-06-10 12:18:07.084595] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.784 [2024-06-10 12:18:07.084650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.784 [2024-06-10 12:18:07.084666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.784 [2024-06-10 12:18:07.084676] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.784 [2024-06-10 12:18:07.084684] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.784 [2024-06-10 12:18:07.084705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.784 qpair failed and we were unable to recover it. 
00:29:17.784 [2024-06-10 12:18:07.094615] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.784 [2024-06-10 12:18:07.094677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.784 [2024-06-10 12:18:07.094694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.784 [2024-06-10 12:18:07.094703] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.784 [2024-06-10 12:18:07.094712] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.784 [2024-06-10 12:18:07.094729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.784 qpair failed and we were unable to recover it. 
00:29:17.784 [2024-06-10 12:18:07.104650] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.784 [2024-06-10 12:18:07.104706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.784 [2024-06-10 12:18:07.104723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.784 [2024-06-10 12:18:07.104732] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.784 [2024-06-10 12:18:07.104740] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.784 [2024-06-10 12:18:07.104758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.784 qpair failed and we were unable to recover it. 
00:29:17.784 [2024-06-10 12:18:07.114697] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.784 [2024-06-10 12:18:07.114761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.784 [2024-06-10 12:18:07.114778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.784 [2024-06-10 12:18:07.114788] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.784 [2024-06-10 12:18:07.114801] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.784 [2024-06-10 12:18:07.114818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.784 qpair failed and we were unable to recover it. 
00:29:17.784 [2024-06-10 12:18:07.124700] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.784 [2024-06-10 12:18:07.124759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.784 [2024-06-10 12:18:07.124775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.784 [2024-06-10 12:18:07.124785] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.784 [2024-06-10 12:18:07.124793] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.784 [2024-06-10 12:18:07.124811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.784 qpair failed and we were unable to recover it. 
00:29:17.784 [2024-06-10 12:18:07.134767] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.784 [2024-06-10 12:18:07.134824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.784 [2024-06-10 12:18:07.134843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.784 [2024-06-10 12:18:07.134852] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.784 [2024-06-10 12:18:07.134861] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.784 [2024-06-10 12:18:07.134878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.784 qpair failed and we were unable to recover it. 
00:29:17.784 [2024-06-10 12:18:07.144758] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.784 [2024-06-10 12:18:07.144814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.784 [2024-06-10 12:18:07.144831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.784 [2024-06-10 12:18:07.144840] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.784 [2024-06-10 12:18:07.144849] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.784 [2024-06-10 12:18:07.144866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.784 qpair failed and we were unable to recover it. 
00:29:17.784 [2024-06-10 12:18:07.154784] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.784 [2024-06-10 12:18:07.154843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.784 [2024-06-10 12:18:07.154859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.784 [2024-06-10 12:18:07.154868] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.784 [2024-06-10 12:18:07.154877] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.784 [2024-06-10 12:18:07.154894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.784 qpair failed and we were unable to recover it. 
00:29:17.784 [2024-06-10 12:18:07.164804] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.784 [2024-06-10 12:18:07.164862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.784 [2024-06-10 12:18:07.164878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.784 [2024-06-10 12:18:07.164887] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.784 [2024-06-10 12:18:07.164896] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.784 [2024-06-10 12:18:07.164913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.785 qpair failed and we were unable to recover it. 
00:29:17.785 [2024-06-10 12:18:07.174808] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.785 [2024-06-10 12:18:07.174866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.785 [2024-06-10 12:18:07.174882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.785 [2024-06-10 12:18:07.174892] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.785 [2024-06-10 12:18:07.174900] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.785 [2024-06-10 12:18:07.174921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.785 qpair failed and we were unable to recover it. 
00:29:17.785 [2024-06-10 12:18:07.184857] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.785 [2024-06-10 12:18:07.184916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.785 [2024-06-10 12:18:07.184933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.785 [2024-06-10 12:18:07.184942] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.785 [2024-06-10 12:18:07.184951] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.785 [2024-06-10 12:18:07.184968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.785 qpair failed and we were unable to recover it. 
00:29:17.785 [2024-06-10 12:18:07.194887] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.785 [2024-06-10 12:18:07.194944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.785 [2024-06-10 12:18:07.194960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.785 [2024-06-10 12:18:07.194969] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.785 [2024-06-10 12:18:07.194978] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.785 [2024-06-10 12:18:07.194996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.785 qpair failed and we were unable to recover it. 
00:29:17.785 [2024-06-10 12:18:07.204916] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.785 [2024-06-10 12:18:07.204972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.785 [2024-06-10 12:18:07.204989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.785 [2024-06-10 12:18:07.204998] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.785 [2024-06-10 12:18:07.205007] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.785 [2024-06-10 12:18:07.205024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.785 qpair failed and we were unable to recover it. 
00:29:17.785 [2024-06-10 12:18:07.214912] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.785 [2024-06-10 12:18:07.215001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.785 [2024-06-10 12:18:07.215017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.785 [2024-06-10 12:18:07.215027] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.785 [2024-06-10 12:18:07.215035] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.785 [2024-06-10 12:18:07.215052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.785 qpair failed and we were unable to recover it. 
00:29:17.785 [2024-06-10 12:18:07.224944] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.785 [2024-06-10 12:18:07.225032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.785 [2024-06-10 12:18:07.225050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.785 [2024-06-10 12:18:07.225059] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.785 [2024-06-10 12:18:07.225069] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.785 [2024-06-10 12:18:07.225086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.785 qpair failed and we were unable to recover it. 
00:29:17.785 [2024-06-10 12:18:07.234992] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.785 [2024-06-10 12:18:07.235057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.785 [2024-06-10 12:18:07.235075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.785 [2024-06-10 12:18:07.235084] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.785 [2024-06-10 12:18:07.235092] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.785 [2024-06-10 12:18:07.235114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.785 qpair failed and we were unable to recover it. 
00:29:17.785 [2024-06-10 12:18:07.245067] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.785 [2024-06-10 12:18:07.245125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.785 [2024-06-10 12:18:07.245144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.785 [2024-06-10 12:18:07.245155] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.785 [2024-06-10 12:18:07.245164] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.785 [2024-06-10 12:18:07.245182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.785 qpair failed and we were unable to recover it. 
00:29:17.785 [2024-06-10 12:18:07.255051] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.785 [2024-06-10 12:18:07.255113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.785 [2024-06-10 12:18:07.255129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.785 [2024-06-10 12:18:07.255138] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.785 [2024-06-10 12:18:07.255147] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.785 [2024-06-10 12:18:07.255165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.785 qpair failed and we were unable to recover it. 
00:29:17.785 [2024-06-10 12:18:07.265085] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.785 [2024-06-10 12:18:07.265143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.785 [2024-06-10 12:18:07.265160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.785 [2024-06-10 12:18:07.265169] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.785 [2024-06-10 12:18:07.265182] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.785 [2024-06-10 12:18:07.265201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.785 qpair failed and we were unable to recover it. 
00:29:17.785 [2024-06-10 12:18:07.275106] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.785 [2024-06-10 12:18:07.275163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.785 [2024-06-10 12:18:07.275180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.785 [2024-06-10 12:18:07.275189] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.785 [2024-06-10 12:18:07.275198] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.785 [2024-06-10 12:18:07.275216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.785 qpair failed and we were unable to recover it. 
00:29:17.785 [2024-06-10 12:18:07.285088] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.785 [2024-06-10 12:18:07.285168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.785 [2024-06-10 12:18:07.285185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.785 [2024-06-10 12:18:07.285194] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.785 [2024-06-10 12:18:07.285203] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.785 [2024-06-10 12:18:07.285221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.785 qpair failed and we were unable to recover it. 
00:29:17.785 [2024-06-10 12:18:07.295164] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.785 [2024-06-10 12:18:07.295245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.785 [2024-06-10 12:18:07.295262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.785 [2024-06-10 12:18:07.295272] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.785 [2024-06-10 12:18:07.295281] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:17.785 [2024-06-10 12:18:07.295299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.785 qpair failed and we were unable to recover it. 
00:29:18.045 [2024-06-10 12:18:07.305211] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.045 [2024-06-10 12:18:07.305298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.045 [2024-06-10 12:18:07.305316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.045 [2024-06-10 12:18:07.305326] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.045 [2024-06-10 12:18:07.305335] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.045 [2024-06-10 12:18:07.305352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.045 qpair failed and we were unable to recover it. 
00:29:18.045 [2024-06-10 12:18:07.315208] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.045 [2024-06-10 12:18:07.315269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.045 [2024-06-10 12:18:07.315286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.045 [2024-06-10 12:18:07.315296] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.045 [2024-06-10 12:18:07.315304] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.045 [2024-06-10 12:18:07.315322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.045 qpair failed and we were unable to recover it. 
00:29:18.045 [2024-06-10 12:18:07.325246] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.045 [2024-06-10 12:18:07.325322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.045 [2024-06-10 12:18:07.325340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.045 [2024-06-10 12:18:07.325350] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.045 [2024-06-10 12:18:07.325359] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.045 [2024-06-10 12:18:07.325376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.045 qpair failed and we were unable to recover it. 
00:29:18.045 [2024-06-10 12:18:07.335211] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.045 [2024-06-10 12:18:07.335272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.045 [2024-06-10 12:18:07.335289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.045 [2024-06-10 12:18:07.335298] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.045 [2024-06-10 12:18:07.335307] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.045 [2024-06-10 12:18:07.335324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.045 qpair failed and we were unable to recover it. 
00:29:18.045 [2024-06-10 12:18:07.345264] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.045 [2024-06-10 12:18:07.345328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.045 [2024-06-10 12:18:07.345345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.045 [2024-06-10 12:18:07.345354] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.045 [2024-06-10 12:18:07.345362] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.045 [2024-06-10 12:18:07.345383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.045 qpair failed and we were unable to recover it. 
00:29:18.045 [2024-06-10 12:18:07.355353] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.045 [2024-06-10 12:18:07.355408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.045 [2024-06-10 12:18:07.355424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.045 [2024-06-10 12:18:07.355439] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.045 [2024-06-10 12:18:07.355448] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.045 [2024-06-10 12:18:07.355465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.045 qpair failed and we were unable to recover it. 
00:29:18.045 [2024-06-10 12:18:07.365414] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.045 [2024-06-10 12:18:07.365498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.045 [2024-06-10 12:18:07.365515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.045 [2024-06-10 12:18:07.365531] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.045 [2024-06-10 12:18:07.365540] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.045 [2024-06-10 12:18:07.365557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.045 qpair failed and we were unable to recover it. 
00:29:18.045 [2024-06-10 12:18:07.375426] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.045 [2024-06-10 12:18:07.375487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.045 [2024-06-10 12:18:07.375504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.045 [2024-06-10 12:18:07.375514] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.045 [2024-06-10 12:18:07.375522] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.045 [2024-06-10 12:18:07.375540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.045 qpair failed and we were unable to recover it. 
00:29:18.045 [2024-06-10 12:18:07.385427] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.045 [2024-06-10 12:18:07.385491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.045 [2024-06-10 12:18:07.385509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.045 [2024-06-10 12:18:07.385518] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.045 [2024-06-10 12:18:07.385526] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.045 [2024-06-10 12:18:07.385543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.045 qpair failed and we were unable to recover it. 
00:29:18.045 [2024-06-10 12:18:07.395492] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.045 [2024-06-10 12:18:07.395564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.045 [2024-06-10 12:18:07.395581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.045 [2024-06-10 12:18:07.395590] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.045 [2024-06-10 12:18:07.395598] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.045 [2024-06-10 12:18:07.395616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.045 qpair failed and we were unable to recover it. 
00:29:18.045 [2024-06-10 12:18:07.405480] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.046 [2024-06-10 12:18:07.405543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.046 [2024-06-10 12:18:07.405560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.046 [2024-06-10 12:18:07.405570] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.046 [2024-06-10 12:18:07.405578] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.046 [2024-06-10 12:18:07.405596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.046 qpair failed and we were unable to recover it. 
00:29:18.046 [2024-06-10 12:18:07.415488] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.046 [2024-06-10 12:18:07.415574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.046 [2024-06-10 12:18:07.415591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.046 [2024-06-10 12:18:07.415600] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.046 [2024-06-10 12:18:07.415609] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.046 [2024-06-10 12:18:07.415626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.046 qpair failed and we were unable to recover it. 
00:29:18.046 [2024-06-10 12:18:07.425626] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.046 [2024-06-10 12:18:07.425717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.046 [2024-06-10 12:18:07.425733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.046 [2024-06-10 12:18:07.425743] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.046 [2024-06-10 12:18:07.425751] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.046 [2024-06-10 12:18:07.425768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.046 qpair failed and we were unable to recover it. 
00:29:18.046 [2024-06-10 12:18:07.435608] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.046 [2024-06-10 12:18:07.435668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.046 [2024-06-10 12:18:07.435685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.046 [2024-06-10 12:18:07.435695] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.046 [2024-06-10 12:18:07.435703] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.046 [2024-06-10 12:18:07.435721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.046 qpair failed and we were unable to recover it. 
00:29:18.046 [2024-06-10 12:18:07.445609] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.046 [2024-06-10 12:18:07.445666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.046 [2024-06-10 12:18:07.445686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.046 [2024-06-10 12:18:07.445695] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.046 [2024-06-10 12:18:07.445704] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.046 [2024-06-10 12:18:07.445721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.046 qpair failed and we were unable to recover it. 
00:29:18.046 [2024-06-10 12:18:07.455588] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.046 [2024-06-10 12:18:07.455651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.046 [2024-06-10 12:18:07.455668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.046 [2024-06-10 12:18:07.455677] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.046 [2024-06-10 12:18:07.455685] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.046 [2024-06-10 12:18:07.455703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.046 qpair failed and we were unable to recover it. 
00:29:18.046 [2024-06-10 12:18:07.465681] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.046 [2024-06-10 12:18:07.465740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.046 [2024-06-10 12:18:07.465756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.046 [2024-06-10 12:18:07.465766] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.046 [2024-06-10 12:18:07.465774] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.046 [2024-06-10 12:18:07.465792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.046 qpair failed and we were unable to recover it. 
00:29:18.046 [2024-06-10 12:18:07.475745] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.046 [2024-06-10 12:18:07.475813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.046 [2024-06-10 12:18:07.475830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.046 [2024-06-10 12:18:07.475840] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.046 [2024-06-10 12:18:07.475848] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.046 [2024-06-10 12:18:07.475865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.046 qpair failed and we were unable to recover it. 
00:29:18.046 [2024-06-10 12:18:07.485737] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.046 [2024-06-10 12:18:07.485796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.046 [2024-06-10 12:18:07.485813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.046 [2024-06-10 12:18:07.485823] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.046 [2024-06-10 12:18:07.485831] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.046 [2024-06-10 12:18:07.485851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.046 qpair failed and we were unable to recover it. 
00:29:18.046 [2024-06-10 12:18:07.495689] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.046 [2024-06-10 12:18:07.495767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.046 [2024-06-10 12:18:07.495784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.046 [2024-06-10 12:18:07.495793] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.046 [2024-06-10 12:18:07.495801] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.046 [2024-06-10 12:18:07.495818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.046 qpair failed and we were unable to recover it. 
00:29:18.046 [2024-06-10 12:18:07.505825] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.046 [2024-06-10 12:18:07.505891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.046 [2024-06-10 12:18:07.505907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.046 [2024-06-10 12:18:07.505917] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.046 [2024-06-10 12:18:07.505925] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.046 [2024-06-10 12:18:07.505943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.046 qpair failed and we were unable to recover it. 
00:29:18.046 [2024-06-10 12:18:07.515815] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.046 [2024-06-10 12:18:07.515872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.046 [2024-06-10 12:18:07.515889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.046 [2024-06-10 12:18:07.515898] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.046 [2024-06-10 12:18:07.515907] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.046 [2024-06-10 12:18:07.515924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.046 qpair failed and we were unable to recover it. 
00:29:18.046 [2024-06-10 12:18:07.525841] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.046 [2024-06-10 12:18:07.525904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.046 [2024-06-10 12:18:07.525921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.046 [2024-06-10 12:18:07.525931] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.046 [2024-06-10 12:18:07.525939] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.046 [2024-06-10 12:18:07.525956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.046 qpair failed and we were unable to recover it. 
00:29:18.046 [2024-06-10 12:18:07.535865] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.046 [2024-06-10 12:18:07.535926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.046 [2024-06-10 12:18:07.535946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.046 [2024-06-10 12:18:07.535955] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.046 [2024-06-10 12:18:07.535963] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.047 [2024-06-10 12:18:07.535981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.047 qpair failed and we were unable to recover it. 
00:29:18.047 [2024-06-10 12:18:07.545935] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.047 [2024-06-10 12:18:07.545999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.047 [2024-06-10 12:18:07.546016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.047 [2024-06-10 12:18:07.546025] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.047 [2024-06-10 12:18:07.546034] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.047 [2024-06-10 12:18:07.546051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.047 qpair failed and we were unable to recover it. 
00:29:18.047 [2024-06-10 12:18:07.555956] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.047 [2024-06-10 12:18:07.556038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.047 [2024-06-10 12:18:07.556055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.047 [2024-06-10 12:18:07.556064] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.047 [2024-06-10 12:18:07.556072] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.047 [2024-06-10 12:18:07.556090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.047 qpair failed and we were unable to recover it. 
00:29:18.306 [2024-06-10 12:18:07.565960] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.306 [2024-06-10 12:18:07.566016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.306 [2024-06-10 12:18:07.566033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.306 [2024-06-10 12:18:07.566042] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.306 [2024-06-10 12:18:07.566051] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.306 [2024-06-10 12:18:07.566069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.306 qpair failed and we were unable to recover it. 
00:29:18.306 [2024-06-10 12:18:07.576029] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.306 [2024-06-10 12:18:07.576097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.306 [2024-06-10 12:18:07.576115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.306 [2024-06-10 12:18:07.576124] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.306 [2024-06-10 12:18:07.576132] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.306 [2024-06-10 12:18:07.576152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.306 qpair failed and we were unable to recover it. 
00:29:18.306 [2024-06-10 12:18:07.585937] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.306 [2024-06-10 12:18:07.586124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.306 [2024-06-10 12:18:07.586143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.306 [2024-06-10 12:18:07.586152] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.307 [2024-06-10 12:18:07.586161] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.307 [2024-06-10 12:18:07.586179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.307 qpair failed and we were unable to recover it. 
00:29:18.307 [2024-06-10 12:18:07.596074] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.307 [2024-06-10 12:18:07.596162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.307 [2024-06-10 12:18:07.596179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.307 [2024-06-10 12:18:07.596188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.307 [2024-06-10 12:18:07.596197] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.307 [2024-06-10 12:18:07.596214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.307 qpair failed and we were unable to recover it. 
00:29:18.307 [2024-06-10 12:18:07.606072] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.307 [2024-06-10 12:18:07.606132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.307 [2024-06-10 12:18:07.606148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.307 [2024-06-10 12:18:07.606157] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.307 [2024-06-10 12:18:07.606166] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.307 [2024-06-10 12:18:07.606183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.307 qpair failed and we were unable to recover it. 
00:29:18.307 [2024-06-10 12:18:07.616113] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.307 [2024-06-10 12:18:07.616191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.307 [2024-06-10 12:18:07.616208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.307 [2024-06-10 12:18:07.616217] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.307 [2024-06-10 12:18:07.616226] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.307 [2024-06-10 12:18:07.616243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.307 qpair failed and we were unable to recover it. 
00:29:18.307 [2024-06-10 12:18:07.626146] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.307 [2024-06-10 12:18:07.626205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.307 [2024-06-10 12:18:07.626225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.307 [2024-06-10 12:18:07.626235] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.307 [2024-06-10 12:18:07.626244] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.307 [2024-06-10 12:18:07.626261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.307 qpair failed and we were unable to recover it. 
00:29:18.307 [2024-06-10 12:18:07.636126] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.307 [2024-06-10 12:18:07.636194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.307 [2024-06-10 12:18:07.636211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.307 [2024-06-10 12:18:07.636220] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.307 [2024-06-10 12:18:07.636229] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.307 [2024-06-10 12:18:07.636246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.307 qpair failed and we were unable to recover it. 
00:29:18.307 [2024-06-10 12:18:07.646226] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.307 [2024-06-10 12:18:07.646302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.307 [2024-06-10 12:18:07.646319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.307 [2024-06-10 12:18:07.646329] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.307 [2024-06-10 12:18:07.646337] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.307 [2024-06-10 12:18:07.646355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.307 qpair failed and we were unable to recover it. 
00:29:18.307 [2024-06-10 12:18:07.656215] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.307 [2024-06-10 12:18:07.656298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.307 [2024-06-10 12:18:07.656315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.307 [2024-06-10 12:18:07.656324] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.307 [2024-06-10 12:18:07.656333] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.307 [2024-06-10 12:18:07.656350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.307 qpair failed and we were unable to recover it. 
00:29:18.307 [2024-06-10 12:18:07.666232] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.307 [2024-06-10 12:18:07.666291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.307 [2024-06-10 12:18:07.666308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.307 [2024-06-10 12:18:07.666318] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.307 [2024-06-10 12:18:07.666329] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.307 [2024-06-10 12:18:07.666349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.307 qpair failed and we were unable to recover it. 
00:29:18.307 [2024-06-10 12:18:07.676239] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.307 [2024-06-10 12:18:07.676298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.307 [2024-06-10 12:18:07.676315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.307 [2024-06-10 12:18:07.676324] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.307 [2024-06-10 12:18:07.676333] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.307 [2024-06-10 12:18:07.676350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.307 qpair failed and we were unable to recover it. 
00:29:18.307 [2024-06-10 12:18:07.686341] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.307 [2024-06-10 12:18:07.686449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.307 [2024-06-10 12:18:07.686465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.307 [2024-06-10 12:18:07.686475] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.307 [2024-06-10 12:18:07.686488] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.307 [2024-06-10 12:18:07.686506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.307 qpair failed and we were unable to recover it. 
00:29:18.307 [2024-06-10 12:18:07.696273] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.307 [2024-06-10 12:18:07.696337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.307 [2024-06-10 12:18:07.696354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.307 [2024-06-10 12:18:07.696363] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.307 [2024-06-10 12:18:07.696372] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.307 [2024-06-10 12:18:07.696389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.307 qpair failed and we were unable to recover it. 
00:29:18.307 [2024-06-10 12:18:07.706347] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.307 [2024-06-10 12:18:07.706402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.307 [2024-06-10 12:18:07.706418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.307 [2024-06-10 12:18:07.706428] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.307 [2024-06-10 12:18:07.706436] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.307 [2024-06-10 12:18:07.706453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.307 qpair failed and we were unable to recover it. 
00:29:18.307 [2024-06-10 12:18:07.716379] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.307 [2024-06-10 12:18:07.716443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.307 [2024-06-10 12:18:07.716460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.307 [2024-06-10 12:18:07.716469] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.307 [2024-06-10 12:18:07.716482] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.307 [2024-06-10 12:18:07.716499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.307 qpair failed and we were unable to recover it. 
00:29:18.307 [2024-06-10 12:18:07.726406] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.308 [2024-06-10 12:18:07.726464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.308 [2024-06-10 12:18:07.726486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.308 [2024-06-10 12:18:07.726495] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.308 [2024-06-10 12:18:07.726504] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.308 [2024-06-10 12:18:07.726521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.308 qpair failed and we were unable to recover it. 
00:29:18.308 [2024-06-10 12:18:07.736457] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.308 [2024-06-10 12:18:07.736523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.308 [2024-06-10 12:18:07.736540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.308 [2024-06-10 12:18:07.736549] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.308 [2024-06-10 12:18:07.736558] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.308 [2024-06-10 12:18:07.736576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.308 qpair failed and we were unable to recover it. 
00:29:18.308 [2024-06-10 12:18:07.746519] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.308 [2024-06-10 12:18:07.746618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.308 [2024-06-10 12:18:07.746635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.308 [2024-06-10 12:18:07.746644] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.308 [2024-06-10 12:18:07.746653] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.308 [2024-06-10 12:18:07.746671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.308 qpair failed and we were unable to recover it. 
00:29:18.308 [2024-06-10 12:18:07.756472] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.308 [2024-06-10 12:18:07.756543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.308 [2024-06-10 12:18:07.756560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.308 [2024-06-10 12:18:07.756573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.308 [2024-06-10 12:18:07.756581] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.308 [2024-06-10 12:18:07.756599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.308 qpair failed and we were unable to recover it. 
00:29:18.308 [2024-06-10 12:18:07.766561] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.308 [2024-06-10 12:18:07.766621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.308 [2024-06-10 12:18:07.766638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.308 [2024-06-10 12:18:07.766648] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.308 [2024-06-10 12:18:07.766656] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.308 [2024-06-10 12:18:07.766673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.308 qpair failed and we were unable to recover it. 
00:29:18.308 [2024-06-10 12:18:07.776595] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.308 [2024-06-10 12:18:07.776657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.308 [2024-06-10 12:18:07.776674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.308 [2024-06-10 12:18:07.776684] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.308 [2024-06-10 12:18:07.776692] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.308 [2024-06-10 12:18:07.776709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.308 qpair failed and we were unable to recover it. 
00:29:18.308 [2024-06-10 12:18:07.786578] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.308 [2024-06-10 12:18:07.786634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.308 [2024-06-10 12:18:07.786650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.308 [2024-06-10 12:18:07.786660] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.308 [2024-06-10 12:18:07.786669] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.308 [2024-06-10 12:18:07.786686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.308 qpair failed and we were unable to recover it. 
00:29:18.308 [2024-06-10 12:18:07.796607] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.308 [2024-06-10 12:18:07.796665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.308 [2024-06-10 12:18:07.796682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.308 [2024-06-10 12:18:07.796692] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.308 [2024-06-10 12:18:07.796700] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5044000b90 00:29:18.308 [2024-06-10 12:18:07.796717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.308 qpair failed and we were unable to recover it. 00:29:18.308 [2024-06-10 12:18:07.796744] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:29:18.308 A controller has encountered a failure and is being reset. 00:29:18.308 Controller properly reset. 00:29:22.554 Initializing NVMe Controllers 00:29:22.554 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:22.554 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:22.554 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:22.554 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:22.554 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:22.554 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:22.554 Initialization complete. Launching workers. 
00:29:22.554 Starting thread on core 1 00:29:22.554 Starting thread on core 2 00:29:22.554 Starting thread on core 3 00:29:22.554 Starting thread on core 0 00:29:22.554 12:18:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:22.554 00:29:22.554 real 0m11.259s 00:29:22.554 user 0m30.644s 00:29:22.554 sys 0m5.679s 00:29:22.554 12:18:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:22.554 12:18:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:22.554 ************************************ 00:29:22.554 END TEST nvmf_target_disconnect_tc2 00:29:22.554 ************************************ 00:29:22.554 12:18:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:22.554 12:18:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:22.554 12:18:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:22.554 12:18:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:22.554 12:18:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:29:22.554 12:18:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:22.554 12:18:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:29:22.554 12:18:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:22.554 12:18:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:22.554 rmmod nvme_tcp 00:29:22.554 rmmod nvme_fabrics 00:29:22.554 rmmod nvme_keyring 00:29:22.554 12:18:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:22.554 12:18:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:29:22.554 12:18:11 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:29:22.554 12:18:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2385338 ']' 00:29:22.554 12:18:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2385338 00:29:22.554 12:18:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@949 -- # '[' -z 2385338 ']' 00:29:22.554 12:18:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # kill -0 2385338 00:29:22.554 12:18:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # uname 00:29:22.554 12:18:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:22.554 12:18:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2385338 00:29:22.554 12:18:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_4 00:29:22.554 12:18:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_4 = sudo ']' 00:29:22.554 12:18:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2385338' 00:29:22.554 killing process with pid 2385338 00:29:22.554 12:18:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # kill 2385338 00:29:22.554 12:18:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # wait 2385338 00:29:22.554 12:18:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:22.554 12:18:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:22.554 12:18:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:22.554 12:18:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:22.554 12:18:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:22.554 12:18:11 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.554 12:18:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:22.554 12:18:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.455 12:18:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:24.455 00:29:24.455 real 0m20.345s 00:29:24.455 user 0m57.272s 00:29:24.455 sys 0m11.371s 00:29:24.455 12:18:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:24.455 12:18:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:24.455 ************************************ 00:29:24.455 END TEST nvmf_target_disconnect 00:29:24.455 ************************************ 00:29:24.455 12:18:13 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:29:24.455 12:18:13 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:24.455 12:18:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:24.455 12:18:13 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:29:24.455 00:29:24.455 real 22m9.166s 00:29:24.455 user 45m54.421s 00:29:24.455 sys 8m5.831s 00:29:24.456 12:18:13 nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:24.456 12:18:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:24.456 ************************************ 00:29:24.456 END TEST nvmf_tcp 00:29:24.456 ************************************ 00:29:24.715 12:18:13 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:29:24.715 12:18:13 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:24.715 12:18:13 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:29:24.715 12:18:13 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:24.715 12:18:13 -- 
common/autotest_common.sh@10 -- # set +x 00:29:24.715 ************************************ 00:29:24.715 START TEST spdkcli_nvmf_tcp 00:29:24.715 ************************************ 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:24.715 * Looking for test storage... 00:29:24.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:24.715 12:18:14 
spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2387109 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2387109 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@830 -- # '[' -z 2387109 ']' 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:24.715 12:18:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:24.716 [2024-06-10 12:18:14.206501] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:29:24.716 [2024-06-10 12:18:14.206554] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2387109 ] 00:29:24.974 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.974 [2024-06-10 12:18:14.276036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:24.974 [2024-06-10 12:18:14.350573] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.974 [2024-06-10 12:18:14.350578] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.540 12:18:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:25.540 12:18:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@863 -- # return 0 00:29:25.540 12:18:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:25.540 12:18:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:25.540 12:18:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:25.540 12:18:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:25.540 12:18:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:25.540 12:18:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:25.540 12:18:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:25.797 12:18:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:25.797 12:18:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 
''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:25.797 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:25.797 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:25.797 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:25.797 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:25.797 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:25.797 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:25.797 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:25.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:25.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:25.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:25.797 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:25.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:25.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:25.797 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:25.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:25.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:25.797 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:25.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:25.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:25.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:25.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:25.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:25.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:25.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:25.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:25.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:25.797 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:25.797 ' 00:29:28.323 [2024-06-10 12:18:17.465820] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:29.255 [2024-06-10 12:18:18.669824] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:31.781 [2024-06-10 12:18:20.888712] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:33.685 [2024-06-10 12:18:22.798593] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:29:35.057 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:35.057 
Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:35.057 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:35.057 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:35.057 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:35.057 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:35.057 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:35.057 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:35.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:35.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:35.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:35.057 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:35.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:35.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:35.057 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:35.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:35.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', 
'127.0.0.1:4260', True] 00:29:35.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:35.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:35.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:35.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:35.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:35.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:35.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:35.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:35.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:35.057 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:35.057 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:35.057 12:18:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:35.057 12:18:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:35.057 12:18:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:35.057 12:18:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:35.057 12:18:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # 
xtrace_disable 00:29:35.057 12:18:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:35.057 12:18:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:29:35.057 12:18:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:29:35.316 12:18:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:35.316 12:18:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:35.316 12:18:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:35.316 12:18:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:35.316 12:18:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:35.574 12:18:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:35.574 12:18:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:35.574 12:18:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:35.574 12:18:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:35.574 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:35.574 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:35.574 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:35.574 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' 
'\''127.0.0.1:4262'\'' 00:29:35.574 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:35.574 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:35.574 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:35.574 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:35.574 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:35.574 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:35.574 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:35.574 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:35.574 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:35.574 ' 00:29:40.861 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:40.861 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:40.861 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:40.861 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:40.861 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:40.861 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:40.861 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:40.861 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:40.861 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:40.861 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:40.861 Executing command: ['/bdevs/malloc 
delete Malloc4', 'Malloc4', False] 00:29:40.861 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:40.861 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:40.861 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:40.861 12:18:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:40.861 12:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:40.861 12:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:40.861 12:18:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2387109 00:29:40.861 12:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@949 -- # '[' -z 2387109 ']' 00:29:40.861 12:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # kill -0 2387109 00:29:40.861 12:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # uname 00:29:40.861 12:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:40.861 12:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2387109 00:29:40.861 12:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:40.861 12:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:40.861 12:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2387109' 00:29:40.861 killing process with pid 2387109 00:29:40.861 12:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # kill 2387109 00:29:40.861 12:18:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # wait 2387109 00:29:40.861 12:18:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:40.861 12:18:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:40.861 12:18:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2387109 ']' 00:29:40.861 12:18:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 
2387109 00:29:40.861 12:18:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@949 -- # '[' -z 2387109 ']' 00:29:40.861 12:18:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # kill -0 2387109 00:29:40.861 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (2387109) - No such process 00:29:40.861 12:18:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # echo 'Process with pid 2387109 is not found' 00:29:40.861 Process with pid 2387109 is not found 00:29:40.861 12:18:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:40.861 12:18:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:40.861 12:18:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:40.861 00:29:40.861 real 0m16.039s 00:29:40.861 user 0m33.283s 00:29:40.861 sys 0m0.887s 00:29:40.861 12:18:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:40.861 12:18:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:40.861 ************************************ 00:29:40.861 END TEST spdkcli_nvmf_tcp 00:29:40.861 ************************************ 00:29:40.861 12:18:30 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:40.861 12:18:30 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:29:40.861 12:18:30 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:40.861 12:18:30 -- common/autotest_common.sh@10 -- # set +x 00:29:40.861 ************************************ 00:29:40.861 START TEST nvmf_identify_passthru 00:29:40.861 ************************************ 00:29:40.861 12:18:30 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:40.861 * Looking for test storage... 00:29:40.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:40.861 12:18:30 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:40.861 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:29:40.861 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:40.861 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:40.861 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:40.862 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:40.862 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:40.862 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:40.862 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:40.862 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:40.862 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:40.862 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:40.862 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:40.862 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:40.862 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:40.862 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:40.862 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@21 
-- # NET_TYPE=phy 00:29:40.862 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:40.862 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:40.862 12:18:30 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:40.862 12:18:30 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:40.862 12:18:30 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:40.862 12:18:30 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.862 12:18:30 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.862 12:18:30 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.862 12:18:30 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:40.862 12:18:30 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.862 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:29:40.862 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:40.862 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:40.862 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:40.862 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:40.862 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:40.862 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:40.862 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:40.862 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:40.862 12:18:30 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:40.862 12:18:30 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:40.862 12:18:30 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:40.862 12:18:30 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:40.862 12:18:30 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.862 12:18:30 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.862 12:18:30 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.862 12:18:30 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 
00:29:40.862 12:18:30 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.862 12:18:30 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:40.862 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:40.862 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:40.862 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:40.862 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:40.862 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:40.862 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.862 12:18:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:40.862 12:18:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:40.862 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:40.862 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:40.862 12:18:30 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:29:40.862 12:18:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@291 -- # 
pci_devs=() 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:47.428 12:18:36 nvmf_identify_passthru -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:47.428 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:47.428 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:47.428 12:18:36 
nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:47.428 Found net devices under 0000:af:00.0: cvl_0_0 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.428 12:18:36 nvmf_identify_passthru -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:47.428 Found net devices under 0000:af:00.1: cvl_0_1 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:47.428 12:18:36 
nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:47.428 12:18:36 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:47.687 12:18:37 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:47.687 12:18:37 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:47.687 12:18:37 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:47.687 12:18:37 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:47.687 12:18:37 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:47.687 12:18:37 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:47.687 12:18:37 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:47.687 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:47.687 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:29:47.687 00:29:47.687 --- 10.0.0.2 ping statistics --- 00:29:47.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.687 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:29:47.687 12:18:37 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:47.687 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:47.687 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:29:47.687 00:29:47.687 --- 10.0.0.1 ping statistics --- 00:29:47.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.687 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:29:47.687 12:18:37 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:47.687 12:18:37 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:29:47.687 12:18:37 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:47.687 12:18:37 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:47.687 12:18:37 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:47.687 12:18:37 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:47.687 12:18:37 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:47.687 12:18:37 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:47.687 12:18:37 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:47.687 12:18:37 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:29:47.687 12:18:37 nvmf_identify_passthru -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:47.687 12:18:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:47.687 12:18:37 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:29:47.687 12:18:37 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # bdfs=() 00:29:47.687 12:18:37 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # local bdfs 00:29:47.687 12:18:37 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=($(get_nvme_bdfs)) 00:29:47.687 12:18:37 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # get_nvme_bdfs 00:29:47.687 12:18:37 nvmf_identify_passthru -- 
common/autotest_common.sh@1512 -- # bdfs=() 00:29:47.687 12:18:37 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # local bdfs 00:29:47.687 12:18:37 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:47.688 12:18:37 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:47.688 12:18:37 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:29:47.946 12:18:37 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:29:47.946 12:18:37 nvmf_identify_passthru -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:d8:00.0 00:29:47.946 12:18:37 nvmf_identify_passthru -- common/autotest_common.sh@1526 -- # echo 0000:d8:00.0 00:29:47.946 12:18:37 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:d8:00.0 00:29:47.946 12:18:37 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:d8:00.0 ']' 00:29:47.946 12:18:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:29:47.946 12:18:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:29:47.946 12:18:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:29:47.946 EAL: No free 2048 kB hugepages reported on node 1 00:29:53.213 12:18:42 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLN916500W71P6AGN 00:29:53.213 12:18:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:29:53.213 12:18:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:29:53.213 12:18:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:29:53.213 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.398 12:18:46 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:29:57.398 12:18:46 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:29:57.398 12:18:46 nvmf_identify_passthru -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:57.398 12:18:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:57.398 12:18:46 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:29:57.398 12:18:46 nvmf_identify_passthru -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:57.398 12:18:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:57.398 12:18:46 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2394655 00:29:57.398 12:18:46 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:57.398 12:18:46 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:57.398 12:18:46 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2394655 00:29:57.398 12:18:46 nvmf_identify_passthru -- common/autotest_common.sh@830 -- # '[' -z 2394655 ']' 00:29:57.398 12:18:46 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.398 12:18:46 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:57.399 12:18:46 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:57.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:57.399 12:18:46 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:57.399 12:18:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:57.399 [2024-06-10 12:18:46.772021] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:29:57.399 [2024-06-10 12:18:46.772073] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:57.399 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.399 [2024-06-10 12:18:46.845533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:57.657 [2024-06-10 12:18:46.919816] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:57.657 [2024-06-10 12:18:46.919855] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:57.657 [2024-06-10 12:18:46.919865] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:57.657 [2024-06-10 12:18:46.919873] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:57.657 [2024-06-10 12:18:46.919881] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:57.657 [2024-06-10 12:18:46.920126] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:29:57.657 [2024-06-10 12:18:46.920202] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:29:57.657 [2024-06-10 12:18:46.920279] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:29:57.657 [2024-06-10 12:18:46.920280] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.226 12:18:47 nvmf_identify_passthru -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:58.226 12:18:47 nvmf_identify_passthru -- common/autotest_common.sh@863 -- # return 0 00:29:58.226 12:18:47 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:29:58.226 12:18:47 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:58.226 12:18:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:58.226 INFO: Log level set to 20 00:29:58.226 INFO: Requests: 00:29:58.226 { 00:29:58.226 "jsonrpc": "2.0", 00:29:58.226 "method": "nvmf_set_config", 00:29:58.226 "id": 1, 00:29:58.226 "params": { 00:29:58.226 "admin_cmd_passthru": { 00:29:58.226 "identify_ctrlr": true 00:29:58.226 } 00:29:58.226 } 00:29:58.226 } 00:29:58.226 00:29:58.226 INFO: response: 00:29:58.226 { 00:29:58.226 "jsonrpc": "2.0", 00:29:58.226 "id": 1, 00:29:58.226 "result": true 00:29:58.226 } 00:29:58.226 00:29:58.226 12:18:47 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:58.226 12:18:47 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:29:58.226 12:18:47 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:58.226 12:18:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:58.226 INFO: Setting log level to 20 00:29:58.226 INFO: Setting log level to 20 00:29:58.226 INFO: Log level set to 20 00:29:58.226 INFO: Log level set to 20 00:29:58.226 
INFO: Requests: 00:29:58.226 { 00:29:58.226 "jsonrpc": "2.0", 00:29:58.226 "method": "framework_start_init", 00:29:58.226 "id": 1 00:29:58.226 } 00:29:58.226 00:29:58.226 INFO: Requests: 00:29:58.226 { 00:29:58.226 "jsonrpc": "2.0", 00:29:58.226 "method": "framework_start_init", 00:29:58.226 "id": 1 00:29:58.226 } 00:29:58.226 00:29:58.226 [2024-06-10 12:18:47.670411] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:29:58.226 INFO: response: 00:29:58.226 { 00:29:58.226 "jsonrpc": "2.0", 00:29:58.226 "id": 1, 00:29:58.226 "result": true 00:29:58.226 } 00:29:58.226 00:29:58.226 INFO: response: 00:29:58.226 { 00:29:58.226 "jsonrpc": "2.0", 00:29:58.226 "id": 1, 00:29:58.226 "result": true 00:29:58.226 } 00:29:58.226 00:29:58.226 12:18:47 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:58.226 12:18:47 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:58.226 12:18:47 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:58.226 12:18:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:58.226 INFO: Setting log level to 40 00:29:58.226 INFO: Setting log level to 40 00:29:58.226 INFO: Setting log level to 40 00:29:58.226 [2024-06-10 12:18:47.683843] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:58.226 12:18:47 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:58.226 12:18:47 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:29:58.226 12:18:47 nvmf_identify_passthru -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:58.226 12:18:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:58.226 12:18:47 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 00:29:58.226 12:18:47 
nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:58.226 12:18:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:01.515 Nvme0n1 00:30:01.515 12:18:50 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:01.515 12:18:50 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:01.515 12:18:50 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:01.515 12:18:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:01.515 12:18:50 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:01.515 12:18:50 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:01.515 12:18:50 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:01.515 12:18:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:01.515 12:18:50 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:01.515 12:18:50 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:01.515 12:18:50 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:01.515 12:18:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:01.515 [2024-06-10 12:18:50.615781] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:01.515 12:18:50 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:01.515 12:18:50 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:01.515 12:18:50 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:01.515 12:18:50 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:01.515 [ 00:30:01.515 { 00:30:01.515 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:01.515 "subtype": "Discovery", 00:30:01.515 "listen_addresses": [], 00:30:01.515 "allow_any_host": true, 00:30:01.515 "hosts": [] 00:30:01.515 }, 00:30:01.515 { 00:30:01.515 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:01.515 "subtype": "NVMe", 00:30:01.515 "listen_addresses": [ 00:30:01.515 { 00:30:01.515 "trtype": "TCP", 00:30:01.515 "adrfam": "IPv4", 00:30:01.515 "traddr": "10.0.0.2", 00:30:01.515 "trsvcid": "4420" 00:30:01.515 } 00:30:01.515 ], 00:30:01.515 "allow_any_host": true, 00:30:01.515 "hosts": [], 00:30:01.515 "serial_number": "SPDK00000000000001", 00:30:01.515 "model_number": "SPDK bdev Controller", 00:30:01.515 "max_namespaces": 1, 00:30:01.515 "min_cntlid": 1, 00:30:01.515 "max_cntlid": 65519, 00:30:01.515 "namespaces": [ 00:30:01.515 { 00:30:01.515 "nsid": 1, 00:30:01.515 "bdev_name": "Nvme0n1", 00:30:01.515 "name": "Nvme0n1", 00:30:01.515 "nguid": "DF9A120B2502414FBAF327F349015187", 00:30:01.515 "uuid": "df9a120b-2502-414f-baf3-27f349015187" 00:30:01.515 } 00:30:01.515 ] 00:30:01.515 } 00:30:01.515 ] 00:30:01.515 12:18:50 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:01.515 12:18:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:01.515 12:18:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:01.515 12:18:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:01.515 EAL: No free 2048 kB hugepages reported on node 1 00:30:01.515 12:18:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLN916500W71P6AGN 00:30:01.515 12:18:50 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:01.515 12:18:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:01.515 12:18:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:01.515 EAL: No free 2048 kB hugepages reported on node 1 00:30:01.515 12:18:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:30:01.515 12:18:50 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLN916500W71P6AGN '!=' BTLN916500W71P6AGN ']' 00:30:01.515 12:18:50 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:30:01.515 12:18:50 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:01.515 12:18:50 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:01.515 12:18:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:01.515 12:18:50 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:01.515 12:18:50 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:01.515 12:18:50 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:01.515 12:18:50 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:01.515 12:18:50 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:30:01.515 12:18:50 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:01.515 12:18:50 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:30:01.515 12:18:50 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:01.515 12:18:50 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:01.515 rmmod 
nvme_tcp 00:30:01.515 rmmod nvme_fabrics 00:30:01.515 rmmod nvme_keyring 00:30:01.515 12:18:51 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:01.515 12:18:51 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:30:01.515 12:18:51 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:30:01.515 12:18:51 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2394655 ']' 00:30:01.515 12:18:51 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2394655 00:30:01.515 12:18:51 nvmf_identify_passthru -- common/autotest_common.sh@949 -- # '[' -z 2394655 ']' 00:30:01.515 12:18:51 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # kill -0 2394655 00:30:01.515 12:18:51 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # uname 00:30:01.515 12:18:51 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:01.773 12:18:51 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2394655 00:30:01.773 12:18:51 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:30:01.773 12:18:51 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:30:01.773 12:18:51 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2394655' 00:30:01.773 killing process with pid 2394655 00:30:01.773 12:18:51 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # kill 2394655 00:30:01.773 12:18:51 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # wait 2394655 00:30:03.678 12:18:53 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:03.678 12:18:53 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:03.678 12:18:53 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:03.678 12:18:53 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:30:03.678 12:18:53 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:03.678 12:18:53 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:03.678 12:18:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:03.678 12:18:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.216 12:18:55 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:06.216 00:30:06.216 real 0m25.085s 00:30:06.216 user 0m33.285s 00:30:06.216 sys 0m6.588s 00:30:06.216 12:18:55 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:06.216 12:18:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:06.216 ************************************ 00:30:06.216 END TEST nvmf_identify_passthru 00:30:06.216 ************************************ 00:30:06.216 12:18:55 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:06.216 12:18:55 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:06.216 12:18:55 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:06.216 12:18:55 -- common/autotest_common.sh@10 -- # set +x 00:30:06.216 ************************************ 00:30:06.216 START TEST nvmf_dif 00:30:06.216 ************************************ 00:30:06.216 12:18:55 nvmf_dif -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:06.216 * Looking for test storage... 
00:30:06.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:06.216 12:18:55 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:06.216 12:18:55 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:06.216 12:18:55 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:06.216 12:18:55 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:06.216 12:18:55 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.216 12:18:55 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.216 12:18:55 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.216 12:18:55 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:30:06.216 12:18:55 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:06.216 12:18:55 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:06.216 12:18:55 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:06.216 12:18:55 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:06.216 12:18:55 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:06.216 12:18:55 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.216 12:18:55 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:06.216 12:18:55 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:06.216 12:18:55 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:30:06.216 12:18:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:12.788 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 
(0x8086 - 0x159b)' 00:30:12.788 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:12.788 Found net devices under 0000:af:00.0: cvl_0_0 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:12.788 Found net devices under 0000:af:00.1: cvl_0_1 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:12.788 12:19:01 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:12.788 12:19:01 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:12.788 12:19:02 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:12.788 12:19:02 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:12.788 12:19:02 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:12.788 12:19:02 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:12.788 12:19:02 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:12.788 12:19:02 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:12.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:12.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:30:12.788 00:30:12.788 --- 10.0.0.2 ping statistics --- 00:30:12.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.788 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:30:12.788 12:19:02 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:12.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:12.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:30:12.788 00:30:12.788 --- 10.0.0.1 ping statistics --- 00:30:12.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.788 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:30:12.788 12:19:02 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:12.788 12:19:02 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:30:12.788 12:19:02 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:12.788 12:19:02 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:15.325 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:30:15.325 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:30:15.325 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:30:15.325 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:30:15.325 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:30:15.325 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:30:15.325 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:30:15.325 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:30:15.325 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:30:15.325 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:30:15.325 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:30:15.325 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:30:15.325 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:30:15.325 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:30:15.325 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:30:15.325 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:30:15.325 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:30:15.605 12:19:04 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:15.605 12:19:04 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:15.605 12:19:04 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:15.605 12:19:04 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:15.605 12:19:04 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:15.605 12:19:04 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:15.605 12:19:04 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:15.605 12:19:04 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:15.605 12:19:04 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:15.605 12:19:04 nvmf_dif -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:15.605 12:19:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:15.605 12:19:04 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2400458 00:30:15.605 12:19:04 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:15.605 12:19:04 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2400458 00:30:15.605 12:19:04 nvmf_dif -- common/autotest_common.sh@830 -- # '[' -z 2400458 ']' 00:30:15.605 12:19:04 nvmf_dif -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:15.605 12:19:04 nvmf_dif -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:15.605 12:19:04 nvmf_dif -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:15.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:15.605 12:19:04 nvmf_dif -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:15.605 12:19:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:15.605 [2024-06-10 12:19:04.959023] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:30:15.605 [2024-06-10 12:19:04.959063] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:15.605 EAL: No free 2048 kB hugepages reported on node 1 00:30:15.605 [2024-06-10 12:19:05.030447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.605 [2024-06-10 12:19:05.103362] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:15.605 [2024-06-10 12:19:05.103400] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:15.605 [2024-06-10 12:19:05.103410] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:15.605 [2024-06-10 12:19:05.103419] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:15.605 [2024-06-10 12:19:05.103442] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:15.605 [2024-06-10 12:19:05.103463] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:16.566 12:19:05 nvmf_dif -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:16.566 12:19:05 nvmf_dif -- common/autotest_common.sh@863 -- # return 0 00:30:16.566 12:19:05 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:16.566 12:19:05 nvmf_dif -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:16.566 12:19:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:16.566 12:19:05 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:16.566 12:19:05 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:16.566 12:19:05 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:16.566 12:19:05 nvmf_dif -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:16.566 12:19:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:16.566 [2024-06-10 12:19:05.817667] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:16.566 12:19:05 nvmf_dif -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:16.566 12:19:05 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:16.566 12:19:05 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:16.566 12:19:05 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:16.566 12:19:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:16.566 ************************************ 00:30:16.566 START TEST fio_dif_1_default 00:30:16.566 ************************************ 00:30:16.566 12:19:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # fio_dif_1 00:30:16.566 12:19:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:16.566 12:19:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:16.566 12:19:05 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:30:16.566 12:19:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:30:16.566 12:19:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:16.566 12:19:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:16.566 12:19:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:16.566 12:19:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:16.566 bdev_null0 00:30:16.566 12:19:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:16.566 12:19:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:16.566 12:19:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:16.566 12:19:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:16.566 12:19:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:16.566 12:19:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:16.566 12:19:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:16.566 12:19:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:16.566 12:19:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:16.566 12:19:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:16.566 12:19:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:16.566 12:19:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:16.566 [2024-06-10 12:19:05.889990] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:16.566 12:19:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:16.566 12:19:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:16.566 12:19:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:16.566 12:19:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:16.566 12:19:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:30:16.566 12:19:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:30:16.566 12:19:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:16.566 12:19:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:16.567 12:19:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:16.567 { 00:30:16.567 "params": { 00:30:16.567 "name": "Nvme$subsystem", 00:30:16.567 "trtype": "$TEST_TRANSPORT", 00:30:16.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.567 "adrfam": "ipv4", 00:30:16.567 "trsvcid": "$NVMF_PORT", 00:30:16.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.567 "hdgst": ${hdgst:-false}, 00:30:16.567 "ddgst": ${ddgst:-false} 00:30:16.567 }, 00:30:16.567 "method": "bdev_nvme_attach_controller" 00:30:16.567 } 00:30:16.567 EOF 00:30:16.567 )") 00:30:16.567 12:19:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:16.567 12:19:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:16.567 12:19:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 
00:30:16.567 12:19:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:16.567 12:19:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:16.567 12:19:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:16.567 12:19:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # local sanitizers 00:30:16.567 12:19:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:16.567 12:19:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # shift 00:30:16.567 12:19:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local asan_lib= 00:30:16.567 12:19:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:16.567 12:19:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:30:16.567 12:19:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:16.567 12:19:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:16.567 12:19:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:30:16.567 12:19:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # grep libasan 00:30:16.567 12:19:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:16.567 12:19:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:30:16.567 12:19:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:30:16.567 12:19:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:16.567 "params": { 00:30:16.567 "name": "Nvme0", 00:30:16.567 "trtype": "tcp", 00:30:16.567 "traddr": "10.0.0.2", 00:30:16.567 "adrfam": "ipv4", 00:30:16.567 "trsvcid": "4420", 00:30:16.567 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:16.567 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:16.567 "hdgst": false, 00:30:16.567 "ddgst": false 00:30:16.567 }, 00:30:16.567 "method": "bdev_nvme_attach_controller" 00:30:16.567 }' 00:30:16.567 12:19:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:16.567 12:19:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:16.567 12:19:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:16.567 12:19:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:16.567 12:19:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:30:16.567 12:19:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:16.567 12:19:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:16.567 12:19:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:16.567 12:19:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:16.567 12:19:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:16.980 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:16.980 fio-3.35 
00:30:16.980 Starting 1 thread 00:30:16.980 EAL: No free 2048 kB hugepages reported on node 1 00:30:29.178 00:30:29.178 filename0: (groupid=0, jobs=1): err= 0: pid=2400892: Mon Jun 10 12:19:16 2024 00:30:29.178 read: IOPS=190, BW=760KiB/s (778kB/s)(7632KiB/10041msec) 00:30:29.178 slat (nsec): min=5626, max=24980, avg=5883.25, stdev=923.06 00:30:29.178 clat (usec): min=451, max=44797, avg=21033.63, stdev=20500.04 00:30:29.178 lat (usec): min=457, max=44821, avg=21039.51, stdev=20500.01 00:30:29.178 clat percentiles (usec): 00:30:29.178 | 1.00th=[ 457], 5.00th=[ 465], 10.00th=[ 469], 20.00th=[ 474], 00:30:29.178 | 30.00th=[ 482], 40.00th=[ 529], 50.00th=[41157], 60.00th=[41157], 00:30:29.178 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:30:29.178 | 99.00th=[41681], 99.50th=[41681], 99.90th=[44827], 99.95th=[44827], 00:30:29.178 | 99.99th=[44827] 00:30:29.178 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=761.60, stdev=16.74, samples=20 00:30:29.178 iops : min= 176, max= 192, avg=190.40, stdev= 4.19, samples=20 00:30:29.178 lat (usec) : 500=37.89%, 750=12.00% 00:30:29.178 lat (msec) : 50=50.10% 00:30:29.178 cpu : usr=85.71%, sys=14.04%, ctx=14, majf=0, minf=211 00:30:29.178 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:29.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.178 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.178 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:29.178 00:30:29.178 Run status group 0 (all jobs): 00:30:29.178 READ: bw=760KiB/s (778kB/s), 760KiB/s-760KiB/s (778kB/s-778kB/s), io=7632KiB (7815kB), run=10041-10041msec 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:29.178 12:19:17 
nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:29.178 00:30:29.178 real 0m11.216s 00:30:29.178 user 0m16.602s 00:30:29.178 sys 0m1.741s 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:29.178 ************************************ 00:30:29.178 END TEST fio_dif_1_default 00:30:29.178 ************************************ 00:30:29.178 12:19:17 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:29.178 12:19:17 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:29.178 12:19:17 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:29.178 12:19:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:29.178 ************************************ 00:30:29.178 START TEST fio_dif_1_multi_subsystems 00:30:29.178 
************************************ 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # fio_dif_1_multi_subsystems 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:29.178 bdev_null0 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:29.178 [2024-06-10 12:19:17.185675] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:29.178 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:29.179 bdev_null1 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:29.179 
12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # local sanitizers 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # shift 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local asan_lib= 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:29.179 { 00:30:29.179 "params": { 00:30:29.179 "name": "Nvme$subsystem", 00:30:29.179 "trtype": "$TEST_TRANSPORT", 00:30:29.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:29.179 "adrfam": "ipv4", 00:30:29.179 "trsvcid": "$NVMF_PORT", 00:30:29.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:29.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:29.179 "hdgst": ${hdgst:-false}, 00:30:29.179 "ddgst": ${ddgst:-false} 00:30:29.179 }, 00:30:29.179 "method": 
"bdev_nvme_attach_controller" 00:30:29.179 } 00:30:29.179 EOF 00:30:29.179 )") 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libasan 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:29.179 { 00:30:29.179 "params": { 00:30:29.179 "name": "Nvme$subsystem", 00:30:29.179 "trtype": "$TEST_TRANSPORT", 00:30:29.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:29.179 "adrfam": "ipv4", 00:30:29.179 "trsvcid": "$NVMF_PORT", 00:30:29.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:29.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:29.179 "hdgst": ${hdgst:-false}, 00:30:29.179 "ddgst": ${ddgst:-false} 00:30:29.179 }, 00:30:29.179 "method": "bdev_nvme_attach_controller" 00:30:29.179 } 00:30:29.179 EOF 00:30:29.179 )") 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- 
# jq . 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:29.179 "params": { 00:30:29.179 "name": "Nvme0", 00:30:29.179 "trtype": "tcp", 00:30:29.179 "traddr": "10.0.0.2", 00:30:29.179 "adrfam": "ipv4", 00:30:29.179 "trsvcid": "4420", 00:30:29.179 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:29.179 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:29.179 "hdgst": false, 00:30:29.179 "ddgst": false 00:30:29.179 }, 00:30:29.179 "method": "bdev_nvme_attach_controller" 00:30:29.179 },{ 00:30:29.179 "params": { 00:30:29.179 "name": "Nvme1", 00:30:29.179 "trtype": "tcp", 00:30:29.179 "traddr": "10.0.0.2", 00:30:29.179 "adrfam": "ipv4", 00:30:29.179 "trsvcid": "4420", 00:30:29.179 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:29.179 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:29.179 "hdgst": false, 00:30:29.179 "ddgst": false 00:30:29.179 }, 00:30:29.179 "method": "bdev_nvme_attach_controller" 00:30:29.179 }' 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:29.179 12:19:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:29.179 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:29.179 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:29.179 fio-3.35 00:30:29.179 Starting 2 threads 00:30:29.179 EAL: No free 2048 kB hugepages reported on node 1 00:30:39.145 00:30:39.145 filename0: (groupid=0, jobs=1): err= 0: pid=2402904: Mon Jun 10 12:19:28 2024 00:30:39.145 read: IOPS=188, BW=753KiB/s (771kB/s)(7552KiB/10035msec) 00:30:39.145 slat (nsec): min=5783, max=28489, avg=6815.82, stdev=1958.83 00:30:39.145 clat (usec): min=385, max=42522, avg=21239.14, stdev=20480.05 00:30:39.145 lat (usec): min=391, max=42528, avg=21245.95, stdev=20479.44 00:30:39.145 clat percentiles (usec): 00:30:39.145 | 1.00th=[ 461], 5.00th=[ 469], 10.00th=[ 474], 20.00th=[ 486], 00:30:39.145 | 30.00th=[ 498], 40.00th=[ 529], 50.00th=[40633], 60.00th=[41157], 00:30:39.145 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:30:39.145 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:30:39.145 | 99.99th=[42730] 00:30:39.145 bw ( KiB/s): min= 672, max= 768, per=49.80%, avg=753.60, stdev=30.22, samples=20 00:30:39.145 iops : min= 168, max= 192, avg=188.40, stdev= 7.56, samples=20 00:30:39.145 lat (usec) : 500=31.89%, 750=17.27%, 1000=0.21% 00:30:39.145 lat (msec) : 50=50.64% 00:30:39.145 cpu : usr=93.69%, sys=6.05%, ctx=11, majf=0, minf=164 00:30:39.145 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:30:39.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:39.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:39.145 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:39.145 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:39.145 filename1: (groupid=0, jobs=1): err= 0: pid=2402905: Mon Jun 10 12:19:28 2024 00:30:39.145 read: IOPS=190, BW=760KiB/s (778kB/s)(7632KiB/10042msec) 00:30:39.145 slat (nsec): min=5775, max=34333, avg=6819.88, stdev=2057.32 00:30:39.145 clat (usec): min=462, max=42544, avg=21031.16, stdev=20494.13 00:30:39.145 lat (usec): min=468, max=42550, avg=21037.98, stdev=20493.48 00:30:39.145 clat percentiles (usec): 00:30:39.145 | 1.00th=[ 469], 5.00th=[ 474], 10.00th=[ 478], 20.00th=[ 486], 00:30:39.145 | 30.00th=[ 490], 40.00th=[ 498], 50.00th=[41157], 60.00th=[41157], 00:30:39.146 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:30:39.146 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:30:39.146 | 99.99th=[42730] 00:30:39.146 bw ( KiB/s): min= 704, max= 768, per=50.33%, avg=761.60, stdev=19.70, samples=20 00:30:39.146 iops : min= 176, max= 192, avg=190.40, stdev= 4.92, samples=20 00:30:39.146 lat (usec) : 500=42.71%, 750=7.08%, 1000=0.10% 00:30:39.146 lat (msec) : 50=50.10% 00:30:39.146 cpu : usr=93.57%, sys=6.17%, ctx=15, majf=0, minf=32 00:30:39.146 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:39.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:39.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:39.146 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:39.146 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:39.146 00:30:39.146 Run status group 0 (all jobs): 00:30:39.146 READ: bw=1512KiB/s (1548kB/s), 753KiB/s-760KiB/s (771kB/s-778kB/s), io=14.8MiB (15.5MB), 
run=10035-10042msec 00:30:39.146 12:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:39.146 12:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:30:39.146 12:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:39.146 12:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:39.146 12:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:30:39.146 12:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:39.146 12:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:39.146 12:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:39.146 12:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:39.146 12:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:39.146 12:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:39.146 12:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:39.146 12:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:39.146 12:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:39.146 12:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:39.146 12:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:30:39.146 12:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:39.146 12:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:39.146 12:19:28 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:39.146 12:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:39.146 12:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:39.146 12:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:39.146 12:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:39.146 12:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:39.146 00:30:39.146 real 0m11.487s 00:30:39.146 user 0m27.928s 00:30:39.146 sys 0m1.592s 00:30:39.146 12:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:39.146 12:19:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:39.146 ************************************ 00:30:39.146 END TEST fio_dif_1_multi_subsystems 00:30:39.146 ************************************ 00:30:39.405 12:19:28 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:39.405 12:19:28 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:39.405 12:19:28 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:39.405 12:19:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:39.405 ************************************ 00:30:39.405 START TEST fio_dif_rand_params 00:30:39.405 ************************************ 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # fio_dif_rand_params 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 
00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:39.405 bdev_null0 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:39.405 [2024-06-10 12:19:28.751263] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # 
gen_nvmf_target_json 0 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:30:39.405 12:19:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:30:39.406 12:19:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:39.406 12:19:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:39.406 12:19:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:39.406 12:19:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:39.406 12:19:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:39.406 12:19:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:39.406 12:19:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:39.406 12:19:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:39.406 { 00:30:39.406 "params": { 00:30:39.406 "name": "Nvme$subsystem", 00:30:39.406 "trtype": "$TEST_TRANSPORT", 00:30:39.406 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:39.406 "adrfam": "ipv4", 00:30:39.406 "trsvcid": "$NVMF_PORT", 00:30:39.406 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:39.406 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:39.406 "hdgst": ${hdgst:-false}, 00:30:39.406 "ddgst": ${ddgst:-false} 00:30:39.406 }, 00:30:39.406 "method": "bdev_nvme_attach_controller" 00:30:39.406 } 00:30:39.406 EOF 00:30:39.406 )") 00:30:39.406 12:19:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:39.406 12:19:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:39.406 12:19:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:30:39.406 12:19:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # 
awk '{print $3}' 00:30:39.406 12:19:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:39.406 12:19:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:39.406 12:19:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:39.406 12:19:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:39.406 12:19:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:39.406 "params": { 00:30:39.406 "name": "Nvme0", 00:30:39.406 "trtype": "tcp", 00:30:39.406 "traddr": "10.0.0.2", 00:30:39.406 "adrfam": "ipv4", 00:30:39.406 "trsvcid": "4420", 00:30:39.406 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:39.406 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:39.406 "hdgst": false, 00:30:39.406 "ddgst": false 00:30:39.406 }, 00:30:39.406 "method": "bdev_nvme_attach_controller" 00:30:39.406 }' 00:30:39.406 12:19:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:39.406 12:19:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:39.406 12:19:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:39.406 12:19:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:39.406 12:19:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:30:39.406 12:19:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:39.406 12:19:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:39.406 12:19:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:39.406 12:19:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:39.406 
12:19:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:39.664 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:39.664 ... 00:30:39.664 fio-3.35 00:30:39.664 Starting 3 threads 00:30:39.664 EAL: No free 2048 kB hugepages reported on node 1 00:30:46.313 00:30:46.313 filename0: (groupid=0, jobs=1): err= 0: pid=2404914: Mon Jun 10 12:19:34 2024 00:30:46.313 read: IOPS=273, BW=34.2MiB/s (35.9MB/s)(173MiB/5047msec) 00:30:46.313 slat (nsec): min=3931, max=25002, avg=9431.67, stdev=2449.59 00:30:46.313 clat (usec): min=3414, max=89735, avg=10910.25, stdev=12009.35 00:30:46.313 lat (usec): min=3421, max=89746, avg=10919.68, stdev=12009.48 00:30:46.313 clat percentiles (usec): 00:30:46.313 | 1.00th=[ 4047], 5.00th=[ 4359], 10.00th=[ 4883], 20.00th=[ 5866], 00:30:46.313 | 30.00th=[ 6194], 40.00th=[ 6587], 50.00th=[ 7177], 60.00th=[ 7832], 00:30:46.313 | 70.00th=[ 8586], 80.00th=[ 9503], 90.00th=[11469], 95.00th=[47449], 00:30:46.313 | 99.00th=[49021], 99.50th=[50070], 99.90th=[87557], 99.95th=[89654], 00:30:46.313 | 99.99th=[89654] 00:30:46.313 bw ( KiB/s): min=19968, max=41728, per=31.39%, avg=35328.00, stdev=6174.74, samples=10 00:30:46.313 iops : min= 156, max= 326, avg=276.00, stdev=48.24, samples=10 00:30:46.313 lat (msec) : 4=0.72%, 10=82.56%, 20=7.60%, 50=8.68%, 100=0.43% 00:30:46.313 cpu : usr=91.70%, sys=8.01%, ctx=10, majf=0, minf=80 00:30:46.313 IO depths : 1=2.0%, 2=98.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:46.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.313 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.313 issued rwts: total=1382,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.313 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:46.313 filename0: (groupid=0, jobs=1): err= 0: 
pid=2404915: Mon Jun 10 12:19:34 2024 00:30:46.313 read: IOPS=268, BW=33.6MiB/s (35.2MB/s)(168MiB/5005msec) 00:30:46.313 slat (nsec): min=5992, max=26761, avg=9462.05, stdev=2555.99 00:30:46.313 clat (usec): min=3446, max=89729, avg=11141.92, stdev=12408.70 00:30:46.313 lat (usec): min=3452, max=89740, avg=11151.38, stdev=12408.85 00:30:46.314 clat percentiles (usec): 00:30:46.314 | 1.00th=[ 3687], 5.00th=[ 3982], 10.00th=[ 4686], 20.00th=[ 5800], 00:30:46.314 | 30.00th=[ 6194], 40.00th=[ 6587], 50.00th=[ 7373], 60.00th=[ 8094], 00:30:46.314 | 70.00th=[ 8848], 80.00th=[ 9896], 90.00th=[11994], 95.00th=[47449], 00:30:46.314 | 99.00th=[50070], 99.50th=[50070], 99.90th=[89654], 99.95th=[89654], 00:30:46.314 | 99.99th=[89654] 00:30:46.314 bw ( KiB/s): min=23040, max=39168, per=30.55%, avg=34380.80, stdev=5827.77, samples=10 00:30:46.314 iops : min= 180, max= 306, avg=268.60, stdev=45.53, samples=10 00:30:46.314 lat (msec) : 4=5.50%, 10=75.71%, 20=9.44%, 50=8.10%, 100=1.26% 00:30:46.314 cpu : usr=91.43%, sys=8.25%, ctx=13, majf=0, minf=96 00:30:46.314 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:46.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.314 issued rwts: total=1346,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.314 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:46.314 filename0: (groupid=0, jobs=1): err= 0: pid=2404916: Mon Jun 10 12:19:34 2024 00:30:46.314 read: IOPS=341, BW=42.7MiB/s (44.8MB/s)(214MiB/5003msec) 00:30:46.314 slat (nsec): min=5938, max=26622, avg=9403.77, stdev=2536.67 00:30:46.314 clat (usec): min=2934, max=88283, avg=8770.62, stdev=8857.42 00:30:46.314 lat (usec): min=2941, max=88291, avg=8780.02, stdev=8857.55 00:30:46.314 clat percentiles (usec): 00:30:46.314 | 1.00th=[ 3621], 5.00th=[ 3884], 10.00th=[ 4113], 20.00th=[ 4621], 00:30:46.314 | 30.00th=[ 5800], 40.00th=[ 
6259], 50.00th=[ 6718], 60.00th=[ 7373], 00:30:46.314 | 70.00th=[ 8455], 80.00th=[ 9241], 90.00th=[10552], 95.00th=[12780], 00:30:46.314 | 99.00th=[49546], 99.50th=[50070], 99.90th=[51643], 99.95th=[88605], 00:30:46.314 | 99.99th=[88605] 00:30:46.314 bw ( KiB/s): min=34304, max=55808, per=38.83%, avg=43699.20, stdev=6400.63, samples=10 00:30:46.314 iops : min= 268, max= 436, avg=341.40, stdev=50.00, samples=10 00:30:46.314 lat (msec) : 4=7.43%, 10=79.46%, 20=8.60%, 50=3.98%, 100=0.53% 00:30:46.314 cpu : usr=90.72%, sys=8.98%, ctx=7, majf=0, minf=75 00:30:46.314 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:46.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.314 issued rwts: total=1709,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.314 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:46.314 00:30:46.314 Run status group 0 (all jobs): 00:30:46.314 READ: bw=110MiB/s (115MB/s), 33.6MiB/s-42.7MiB/s (35.2MB/s-44.8MB/s), io=555MiB (582MB), run=5003-5047msec 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.314 bdev_null0 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.314 [2024-06-10 12:19:34.970641] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.314 12:19:34 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.314 bdev_null1 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.314 12:19:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.314 12:19:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.314 12:19:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:46.314 12:19:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:30:46.314 12:19:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:30:46.314 12:19:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 
--dif-type 2 00:30:46.314 12:19:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.314 12:19:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.314 bdev_null2 00:30:46.314 12:19:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.314 12:19:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:46.314 12:19:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.314 12:19:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.314 12:19:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.314 12:19:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:46.314 12:19:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.314 12:19:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.314 12:19:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.314 12:19:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:46.314 12:19:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.314 12:19:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.314 12:19:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.314 12:19:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:46.314 12:19:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:46.314 12:19:35 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:46.314 12:19:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:30:46.314 12:19:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:46.314 12:19:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:46.314 12:19:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:46.315 { 00:30:46.315 "params": { 00:30:46.315 "name": 
"Nvme$subsystem", 00:30:46.315 "trtype": "$TEST_TRANSPORT", 00:30:46.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:46.315 "adrfam": "ipv4", 00:30:46.315 "trsvcid": "$NVMF_PORT", 00:30:46.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:46.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:46.315 "hdgst": ${hdgst:-false}, 00:30:46.315 "ddgst": ${ddgst:-false} 00:30:46.315 }, 00:30:46.315 "method": "bdev_nvme_attach_controller" 00:30:46.315 } 00:30:46.315 EOF 00:30:46.315 )") 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:46.315 { 00:30:46.315 "params": { 00:30:46.315 "name": "Nvme$subsystem", 00:30:46.315 "trtype": "$TEST_TRANSPORT", 00:30:46.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:46.315 "adrfam": "ipv4", 00:30:46.315 "trsvcid": "$NVMF_PORT", 00:30:46.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:46.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:46.315 "hdgst": ${hdgst:-false}, 00:30:46.315 "ddgst": ${ddgst:-false} 00:30:46.315 }, 00:30:46.315 "method": "bdev_nvme_attach_controller" 00:30:46.315 } 00:30:46.315 EOF 00:30:46.315 )") 00:30:46.315 
12:19:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:46.315 { 00:30:46.315 "params": { 00:30:46.315 "name": "Nvme$subsystem", 00:30:46.315 "trtype": "$TEST_TRANSPORT", 00:30:46.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:46.315 "adrfam": "ipv4", 00:30:46.315 "trsvcid": "$NVMF_PORT", 00:30:46.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:46.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:46.315 "hdgst": ${hdgst:-false}, 00:30:46.315 "ddgst": ${ddgst:-false} 00:30:46.315 }, 00:30:46.315 "method": "bdev_nvme_attach_controller" 00:30:46.315 } 00:30:46.315 EOF 00:30:46.315 )") 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:46.315 "params": { 00:30:46.315 "name": "Nvme0", 00:30:46.315 "trtype": "tcp", 00:30:46.315 "traddr": "10.0.0.2", 00:30:46.315 "adrfam": "ipv4", 00:30:46.315 "trsvcid": "4420", 00:30:46.315 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:46.315 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:46.315 "hdgst": false, 00:30:46.315 "ddgst": false 00:30:46.315 }, 00:30:46.315 "method": "bdev_nvme_attach_controller" 00:30:46.315 },{ 00:30:46.315 "params": { 00:30:46.315 "name": "Nvme1", 00:30:46.315 "trtype": "tcp", 00:30:46.315 "traddr": "10.0.0.2", 00:30:46.315 "adrfam": "ipv4", 00:30:46.315 "trsvcid": "4420", 00:30:46.315 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:46.315 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:46.315 "hdgst": false, 00:30:46.315 "ddgst": false 00:30:46.315 }, 00:30:46.315 "method": "bdev_nvme_attach_controller" 00:30:46.315 },{ 00:30:46.315 "params": { 00:30:46.315 "name": "Nvme2", 00:30:46.315 "trtype": "tcp", 00:30:46.315 "traddr": "10.0.0.2", 00:30:46.315 "adrfam": "ipv4", 00:30:46.315 "trsvcid": "4420", 00:30:46.315 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:46.315 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:46.315 "hdgst": false, 00:30:46.315 "ddgst": false 00:30:46.315 }, 00:30:46.315 "method": "bdev_nvme_attach_controller" 00:30:46.315 }' 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:46.315 12:19:35 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:46.315 12:19:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:46.315 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:46.315 ... 00:30:46.315 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:46.315 ... 00:30:46.315 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:46.315 ... 
00:30:46.315 fio-3.35 00:30:46.315 Starting 24 threads 00:30:46.315 EAL: No free 2048 kB hugepages reported on node 1 00:30:58.518 00:30:58.518 filename0: (groupid=0, jobs=1): err= 0: pid=2406132: Mon Jun 10 12:19:46 2024 00:30:58.518 read: IOPS=661, BW=2646KiB/s (2709kB/s)(25.9MiB/10015msec) 00:30:58.518 slat (usec): min=6, max=118, avg=27.69, stdev=13.76 00:30:58.518 clat (usec): min=2078, max=29781, avg=23976.03, stdev=1963.96 00:30:58.518 lat (usec): min=2098, max=29819, avg=24003.72, stdev=1964.52 00:30:58.518 clat percentiles (usec): 00:30:58.518 | 1.00th=[16319], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:30:58.518 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:30:58.518 | 70.00th=[24249], 80.00th=[24249], 90.00th=[25035], 95.00th=[25560], 00:30:58.518 | 99.00th=[26084], 99.50th=[27919], 99.90th=[29754], 99.95th=[29754], 00:30:58.518 | 99.99th=[29754] 00:30:58.518 bw ( KiB/s): min= 2554, max= 3072, per=4.19%, avg=2642.00, stdev=120.03, samples=20 00:30:58.518 iops : min= 638, max= 768, avg=660.40, stdev=30.06, samples=20 00:30:58.518 lat (msec) : 4=0.24%, 10=0.72%, 20=0.24%, 50=98.79% 00:30:58.518 cpu : usr=97.88%, sys=1.73%, ctx=28, majf=0, minf=61 00:30:58.518 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:58.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.518 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.518 issued rwts: total=6624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.518 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:58.518 filename0: (groupid=0, jobs=1): err= 0: pid=2406133: Mon Jun 10 12:19:46 2024 00:30:58.518 read: IOPS=658, BW=2635KiB/s (2698kB/s)(25.8MiB/10006msec) 00:30:58.518 slat (usec): min=6, max=156, avg=28.31, stdev=13.90 00:30:58.518 clat (usec): min=1861, max=29796, avg=24062.89, stdev=1438.36 00:30:58.518 lat (usec): min=1874, max=29831, avg=24091.19, stdev=1438.47 
00:30:58.518 clat percentiles (usec): 00:30:58.518 | 1.00th=[22414], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:30:58.518 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:30:58.518 | 70.00th=[24249], 80.00th=[24249], 90.00th=[25035], 95.00th=[25560], 00:30:58.518 | 99.00th=[26346], 99.50th=[27919], 99.90th=[29754], 99.95th=[29754], 00:30:58.518 | 99.99th=[29754] 00:30:58.518 bw ( KiB/s): min= 2432, max= 2821, per=4.18%, avg=2634.37, stdev=89.21, samples=19 00:30:58.518 iops : min= 608, max= 705, avg=658.58, stdev=22.27, samples=19 00:30:58.518 lat (msec) : 2=0.03%, 10=0.42%, 20=0.27%, 50=99.27% 00:30:58.518 cpu : usr=97.44%, sys=2.16%, ctx=31, majf=0, minf=47 00:30:58.518 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:58.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.518 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.518 issued rwts: total=6592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.518 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:58.518 filename0: (groupid=0, jobs=1): err= 0: pid=2406134: Mon Jun 10 12:19:46 2024 00:30:58.518 read: IOPS=656, BW=2628KiB/s (2691kB/s)(25.7MiB/10010msec) 00:30:58.518 slat (nsec): min=6127, max=70404, avg=24879.10, stdev=13077.27 00:30:58.518 clat (usec): min=9954, max=33621, avg=24150.34, stdev=1081.05 00:30:58.518 lat (usec): min=9968, max=33647, avg=24175.22, stdev=1079.69 00:30:58.518 clat percentiles (usec): 00:30:58.518 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:30:58.518 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:30:58.518 | 70.00th=[24249], 80.00th=[24249], 90.00th=[25035], 95.00th=[25560], 00:30:58.518 | 99.00th=[26346], 99.50th=[28181], 99.90th=[33424], 99.95th=[33817], 00:30:58.518 | 99.99th=[33817] 00:30:58.518 bw ( KiB/s): min= 2560, max= 2688, per=4.17%, avg=2627.37, stdev=65.66, samples=19 
00:30:58.518 iops : min= 640, max= 672, avg=656.84, stdev=16.42, samples=19 00:30:58.518 lat (msec) : 10=0.05%, 20=0.23%, 50=99.73% 00:30:58.518 cpu : usr=97.63%, sys=1.82%, ctx=72, majf=0, minf=50 00:30:58.518 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:58.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.518 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.518 issued rwts: total=6576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.518 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:58.518 filename0: (groupid=0, jobs=1): err= 0: pid=2406135: Mon Jun 10 12:19:46 2024 00:30:58.518 read: IOPS=655, BW=2624KiB/s (2687kB/s)(25.6MiB/10001msec) 00:30:58.518 slat (nsec): min=4843, max=91669, avg=40982.67, stdev=15475.09 00:30:58.518 clat (usec): min=10099, max=46131, avg=24039.54, stdev=1461.63 00:30:58.518 lat (usec): min=10129, max=46146, avg=24080.52, stdev=1461.05 00:30:58.518 clat percentiles (usec): 00:30:58.518 | 1.00th=[22676], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:30:58.518 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:30:58.518 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25035], 95.00th=[25297], 00:30:58.518 | 99.00th=[26084], 99.50th=[29230], 99.90th=[45876], 99.95th=[45876], 00:30:58.518 | 99.99th=[45876] 00:30:58.518 bw ( KiB/s): min= 2432, max= 2688, per=4.16%, avg=2619.68, stdev=88.93, samples=19 00:30:58.518 iops : min= 608, max= 672, avg=654.84, stdev=22.21, samples=19 00:30:58.518 lat (msec) : 20=0.24%, 50=99.76% 00:30:58.518 cpu : usr=97.83%, sys=1.78%, ctx=17, majf=0, minf=42 00:30:58.518 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:58.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.518 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.518 issued rwts: total=6560,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:30:58.518 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:58.518 filename0: (groupid=0, jobs=1): err= 0: pid=2406136: Mon Jun 10 12:19:46 2024 00:30:58.518 read: IOPS=655, BW=2621KiB/s (2684kB/s)(25.6MiB/10012msec) 00:30:58.518 slat (nsec): min=6031, max=77653, avg=16153.13, stdev=10537.62 00:30:58.518 clat (usec): min=17383, max=46999, avg=24288.39, stdev=1357.54 00:30:58.518 lat (usec): min=17391, max=47017, avg=24304.54, stdev=1356.92 00:30:58.518 clat percentiles (usec): 00:30:58.518 | 1.00th=[23200], 5.00th=[23725], 10.00th=[23725], 20.00th=[23987], 00:30:58.518 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:30:58.518 | 70.00th=[24249], 80.00th=[24249], 90.00th=[25297], 95.00th=[25822], 00:30:58.518 | 99.00th=[26084], 99.50th=[29754], 99.90th=[46924], 99.95th=[46924], 00:30:58.518 | 99.99th=[46924] 00:30:58.518 bw ( KiB/s): min= 2432, max= 2688, per=4.16%, avg=2620.63, stdev=78.31, samples=19 00:30:58.518 iops : min= 608, max= 672, avg=655.16, stdev=19.58, samples=19 00:30:58.518 lat (msec) : 20=0.24%, 50=99.76% 00:30:58.518 cpu : usr=97.34%, sys=2.09%, ctx=33, majf=0, minf=41 00:30:58.518 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:58.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.518 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.518 issued rwts: total=6560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.518 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:58.518 filename0: (groupid=0, jobs=1): err= 0: pid=2406137: Mon Jun 10 12:19:46 2024 00:30:58.518 read: IOPS=661, BW=2646KiB/s (2710kB/s)(25.9MiB/10013msec) 00:30:58.518 slat (nsec): min=6164, max=87281, avg=22801.91, stdev=14141.13 00:30:58.518 clat (usec): min=1673, max=29769, avg=24011.29, stdev=2011.58 00:30:58.518 lat (usec): min=1687, max=29786, avg=24034.09, stdev=2011.46 00:30:58.518 clat percentiles (usec): 
00:30:58.518 | 1.00th=[16319], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:30:58.518 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:30:58.518 | 70.00th=[24249], 80.00th=[24249], 90.00th=[25035], 95.00th=[25560], 00:30:58.518 | 99.00th=[26346], 99.50th=[27919], 99.90th=[29754], 99.95th=[29754], 00:30:58.518 | 99.99th=[29754] 00:30:58.518 bw ( KiB/s): min= 2554, max= 3078, per=4.19%, avg=2642.30, stdev=121.16, samples=20 00:30:58.518 iops : min= 638, max= 769, avg=660.45, stdev=30.25, samples=20 00:30:58.518 lat (msec) : 2=0.11%, 4=0.14%, 10=0.72%, 20=0.24%, 50=98.79% 00:30:58.518 cpu : usr=97.58%, sys=1.90%, ctx=80, majf=0, minf=73 00:30:58.518 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:58.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.518 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.518 issued rwts: total=6624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.519 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:58.519 filename0: (groupid=0, jobs=1): err= 0: pid=2406138: Mon Jun 10 12:19:46 2024 00:30:58.519 read: IOPS=655, BW=2622KiB/s (2685kB/s)(25.6MiB/10008msec) 00:30:58.519 slat (nsec): min=5496, max=77436, avg=30275.72, stdev=13723.20 00:30:58.519 clat (usec): min=17139, max=43566, avg=24146.54, stdev=1221.84 00:30:58.519 lat (usec): min=17163, max=43582, avg=24176.82, stdev=1222.01 00:30:58.519 clat percentiles (usec): 00:30:58.519 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:30:58.519 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:30:58.519 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25035], 95.00th=[25560], 00:30:58.519 | 99.00th=[26084], 99.50th=[30016], 99.90th=[43779], 99.95th=[43779], 00:30:58.519 | 99.99th=[43779] 00:30:58.519 bw ( KiB/s): min= 2560, max= 2688, per=4.16%, avg=2620.63, stdev=65.66, samples=19 00:30:58.519 iops : min= 640, max= 
672, avg=655.16, stdev=16.42, samples=19 00:30:58.519 lat (msec) : 20=0.24%, 50=99.76% 00:30:58.519 cpu : usr=97.46%, sys=1.82%, ctx=85, majf=0, minf=37 00:30:58.519 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:58.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.519 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.519 issued rwts: total=6560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.519 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:58.519 filename0: (groupid=0, jobs=1): err= 0: pid=2406139: Mon Jun 10 12:19:46 2024 00:30:58.519 read: IOPS=657, BW=2628KiB/s (2691kB/s)(25.7MiB/10008msec) 00:30:58.519 slat (nsec): min=4575, max=79903, avg=32360.43, stdev=16294.52 00:30:58.519 clat (usec): min=9715, max=43475, avg=24044.11, stdev=1182.09 00:30:58.519 lat (usec): min=9728, max=43489, avg=24076.47, stdev=1183.94 00:30:58.519 clat percentiles (usec): 00:30:58.519 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:30:58.519 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:30:58.519 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25035], 95.00th=[25560], 00:30:58.519 | 99.00th=[26084], 99.50th=[29754], 99.90th=[33424], 99.95th=[33424], 00:30:58.519 | 99.99th=[43254] 00:30:58.519 bw ( KiB/s): min= 2432, max= 2688, per=4.16%, avg=2621.16, stdev=88.81, samples=19 00:30:58.519 iops : min= 608, max= 672, avg=655.26, stdev=22.22, samples=19 00:30:58.519 lat (msec) : 10=0.24%, 20=0.27%, 50=99.48% 00:30:58.519 cpu : usr=97.85%, sys=1.67%, ctx=140, majf=0, minf=44 00:30:58.519 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:58.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.519 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.519 issued rwts: total=6576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.519 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:30:58.519 filename1: (groupid=0, jobs=1): err= 0: pid=2406140: Mon Jun 10 12:19:46 2024 00:30:58.519 read: IOPS=655, BW=2623KiB/s (2686kB/s)(25.6MiB/10003msec) 00:30:58.519 slat (nsec): min=5100, max=90052, avg=43625.53, stdev=18234.43 00:30:58.519 clat (usec): min=10599, max=50172, avg=23975.72, stdev=1479.24 00:30:58.519 lat (usec): min=10625, max=50186, avg=24019.35, stdev=1479.47 00:30:58.519 clat percentiles (usec): 00:30:58.519 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:30:58.519 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:30:58.519 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[25297], 00:30:58.519 | 99.00th=[26084], 99.50th=[29230], 99.90th=[46400], 99.95th=[46400], 00:30:58.519 | 99.99th=[50070] 00:30:58.519 bw ( KiB/s): min= 2432, max= 2688, per=4.16%, avg=2619.68, stdev=88.93, samples=19 00:30:58.519 iops : min= 608, max= 672, avg=654.84, stdev=22.21, samples=19 00:30:58.519 lat (msec) : 20=0.27%, 50=99.70%, 100=0.03% 00:30:58.519 cpu : usr=97.19%, sys=1.99%, ctx=236, majf=0, minf=29 00:30:58.519 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:58.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.519 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.519 issued rwts: total=6560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.519 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:58.519 filename1: (groupid=0, jobs=1): err= 0: pid=2406141: Mon Jun 10 12:19:46 2024 00:30:58.519 read: IOPS=655, BW=2624KiB/s (2687kB/s)(25.6MiB/10001msec) 00:30:58.519 slat (nsec): min=5493, max=79845, avg=32294.47, stdev=16641.33 00:30:58.519 clat (usec): min=17017, max=36578, avg=24080.27, stdev=975.56 00:30:58.519 lat (usec): min=17028, max=36595, avg=24112.57, stdev=977.24 00:30:58.519 clat percentiles (usec): 00:30:58.519 | 1.00th=[22938], 
5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:30:58.519 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:30:58.519 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25035], 95.00th=[25560], 00:30:58.519 | 99.00th=[26084], 99.50th=[29754], 99.90th=[36439], 99.95th=[36439], 00:30:58.519 | 99.99th=[36439] 00:30:58.519 bw ( KiB/s): min= 2432, max= 2688, per=4.16%, avg=2620.32, stdev=78.58, samples=19 00:30:58.519 iops : min= 608, max= 672, avg=655.05, stdev=19.67, samples=19 00:30:58.519 lat (msec) : 20=0.24%, 50=99.76% 00:30:58.519 cpu : usr=97.34%, sys=2.12%, ctx=59, majf=0, minf=38 00:30:58.519 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:58.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.519 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.519 issued rwts: total=6560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.519 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:58.519 filename1: (groupid=0, jobs=1): err= 0: pid=2406142: Mon Jun 10 12:19:46 2024 00:30:58.519 read: IOPS=659, BW=2638KiB/s (2701kB/s)(25.8MiB/10006msec) 00:30:58.519 slat (nsec): min=6978, max=93777, avg=44037.07, stdev=17191.36 00:30:58.519 clat (usec): min=1741, max=29765, avg=23894.11, stdev=1503.40 00:30:58.519 lat (usec): min=1751, max=29807, avg=23938.15, stdev=1505.77 00:30:58.519 clat percentiles (usec): 00:30:58.519 | 1.00th=[18744], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:30:58.519 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:30:58.519 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[25560], 00:30:58.519 | 99.00th=[26084], 99.50th=[27657], 99.90th=[29492], 99.95th=[29754], 00:30:58.519 | 99.99th=[29754] 00:30:58.519 bw ( KiB/s): min= 2432, max= 2869, per=4.18%, avg=2636.89, stdev=95.27, samples=19 00:30:58.519 iops : min= 608, max= 717, avg=659.21, stdev=23.78, samples=19 00:30:58.519 lat 
(msec) : 2=0.05%, 10=0.48%, 20=0.47%, 50=99.00% 00:30:58.519 cpu : usr=97.82%, sys=1.78%, ctx=21, majf=0, minf=35 00:30:58.519 IO depths : 1=6.2%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:30:58.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.519 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.519 issued rwts: total=6598,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.519 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:58.519 filename1: (groupid=0, jobs=1): err= 0: pid=2406143: Mon Jun 10 12:19:46 2024 00:30:58.519 read: IOPS=655, BW=2621KiB/s (2684kB/s)(25.6MiB/10011msec) 00:30:58.519 slat (nsec): min=6822, max=80743, avg=32322.57, stdev=16476.04 00:30:58.519 clat (usec): min=17063, max=46685, avg=24126.08, stdev=1346.66 00:30:58.519 lat (usec): min=17086, max=46706, avg=24158.40, stdev=1346.99 00:30:58.519 clat percentiles (usec): 00:30:58.519 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:30:58.519 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:30:58.519 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25035], 95.00th=[25560], 00:30:58.519 | 99.00th=[26084], 99.50th=[29754], 99.90th=[46400], 99.95th=[46400], 00:30:58.519 | 99.99th=[46924] 00:30:58.519 bw ( KiB/s): min= 2436, max= 2688, per=4.16%, avg=2620.84, stdev=77.78, samples=19 00:30:58.519 iops : min= 609, max= 672, avg=655.21, stdev=19.44, samples=19 00:30:58.519 lat (msec) : 20=0.24%, 50=99.76% 00:30:58.519 cpu : usr=97.65%, sys=1.96%, ctx=19, majf=0, minf=33 00:30:58.519 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:58.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.519 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.519 issued rwts: total=6560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.519 latency : target=0, window=0, percentile=100.00%, depth=16 
00:30:58.519 filename1: (groupid=0, jobs=1): err= 0: pid=2406144: Mon Jun 10 12:19:46 2024 00:30:58.519 read: IOPS=655, BW=2623KiB/s (2686kB/s)(25.6MiB/10003msec) 00:30:58.519 slat (nsec): min=5097, max=93561, avg=42465.24, stdev=18470.86 00:30:58.519 clat (usec): min=10622, max=46524, avg=23984.44, stdev=1467.48 00:30:58.519 lat (usec): min=10647, max=46540, avg=24026.90, stdev=1467.72 00:30:58.519 clat percentiles (usec): 00:30:58.519 | 1.00th=[22676], 5.00th=[23462], 10.00th=[23462], 20.00th=[23462], 00:30:58.519 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:30:58.519 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[25297], 00:30:58.519 | 99.00th=[26084], 99.50th=[29230], 99.90th=[46400], 99.95th=[46400], 00:30:58.519 | 99.99th=[46400] 00:30:58.520 bw ( KiB/s): min= 2432, max= 2688, per=4.16%, avg=2619.68, stdev=88.93, samples=19 00:30:58.520 iops : min= 608, max= 672, avg=654.84, stdev=22.21, samples=19 00:30:58.520 lat (msec) : 20=0.24%, 50=99.76% 00:30:58.520 cpu : usr=97.26%, sys=2.16%, ctx=110, majf=0, minf=22 00:30:58.520 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:58.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.520 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.520 issued rwts: total=6560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.520 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:58.520 filename1: (groupid=0, jobs=1): err= 0: pid=2406145: Mon Jun 10 12:19:46 2024 00:30:58.520 read: IOPS=656, BW=2628KiB/s (2691kB/s)(25.7MiB/10010msec) 00:30:58.520 slat (nsec): min=6562, max=75244, avg=28560.53, stdev=13768.33 00:30:58.520 clat (usec): min=9963, max=34851, avg=24087.28, stdev=1082.27 00:30:58.520 lat (usec): min=9987, max=34873, avg=24115.84, stdev=1082.12 00:30:58.520 clat percentiles (usec): 00:30:58.520 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:30:58.520 | 
30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:30:58.520 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25035], 95.00th=[25560], 00:30:58.520 | 99.00th=[26346], 99.50th=[28181], 99.90th=[33817], 99.95th=[33817], 00:30:58.520 | 99.99th=[34866] 00:30:58.520 bw ( KiB/s): min= 2560, max= 2688, per=4.17%, avg=2627.37, stdev=65.66, samples=19 00:30:58.520 iops : min= 640, max= 672, avg=656.84, stdev=16.42, samples=19 00:30:58.520 lat (msec) : 10=0.03%, 20=0.32%, 50=99.65% 00:30:58.520 cpu : usr=97.86%, sys=1.67%, ctx=60, majf=0, minf=30 00:30:58.520 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:58.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.520 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.520 issued rwts: total=6576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.520 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:58.520 filename1: (groupid=0, jobs=1): err= 0: pid=2406146: Mon Jun 10 12:19:46 2024 00:30:58.520 read: IOPS=657, BW=2628KiB/s (2691kB/s)(25.7MiB/10009msec) 00:30:58.520 slat (nsec): min=4280, max=79889, avg=32557.61, stdev=16703.08 00:30:58.520 clat (usec): min=9718, max=44229, avg=24037.86, stdev=1200.03 00:30:58.520 lat (usec): min=9732, max=44241, avg=24070.42, stdev=1202.03 00:30:58.520 clat percentiles (usec): 00:30:58.520 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:30:58.520 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:30:58.520 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25035], 95.00th=[25560], 00:30:58.520 | 99.00th=[26084], 99.50th=[29754], 99.90th=[34341], 99.95th=[34341], 00:30:58.520 | 99.99th=[44303] 00:30:58.520 bw ( KiB/s): min= 2432, max= 2688, per=4.16%, avg=2620.89, stdev=88.99, samples=19 00:30:58.520 iops : min= 608, max= 672, avg=655.21, stdev=22.26, samples=19 00:30:58.520 lat (msec) : 10=0.24%, 20=0.27%, 50=99.48% 00:30:58.520 cpu : 
usr=96.68%, sys=2.27%, ctx=87, majf=0, minf=42 00:30:58.520 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:58.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.520 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.520 issued rwts: total=6576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.520 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:58.520 filename1: (groupid=0, jobs=1): err= 0: pid=2406147: Mon Jun 10 12:19:46 2024 00:30:58.520 read: IOPS=657, BW=2629KiB/s (2692kB/s)(25.7MiB/10006msec) 00:30:58.520 slat (nsec): min=5909, max=70503, avg=28163.08, stdev=13247.21 00:30:58.520 clat (usec): min=9967, max=31540, avg=24096.51, stdev=1040.85 00:30:58.520 lat (usec): min=9974, max=31568, avg=24124.67, stdev=1040.24 00:30:58.520 clat percentiles (usec): 00:30:58.520 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:30:58.520 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:30:58.520 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25035], 95.00th=[25560], 00:30:58.520 | 99.00th=[27132], 99.50th=[28705], 99.90th=[29754], 99.95th=[29754], 00:30:58.520 | 99.99th=[31589] 00:30:58.520 bw ( KiB/s): min= 2560, max= 2688, per=4.17%, avg=2627.37, stdev=65.66, samples=19 00:30:58.520 iops : min= 640, max= 672, avg=656.84, stdev=16.42, samples=19 00:30:58.520 lat (msec) : 10=0.06%, 20=0.24%, 50=99.70% 00:30:58.520 cpu : usr=97.91%, sys=1.69%, ctx=20, majf=0, minf=36 00:30:58.520 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:58.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.520 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.520 issued rwts: total=6576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.520 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:58.520 filename2: (groupid=0, jobs=1): err= 0: 
pid=2406148: Mon Jun 10 12:19:46 2024 00:30:58.520 read: IOPS=655, BW=2621KiB/s (2684kB/s)(25.6MiB/10011msec) 00:30:58.520 slat (nsec): min=4951, max=83395, avg=32338.29, stdev=15965.87 00:30:58.520 clat (usec): min=13007, max=43479, avg=24117.92, stdev=1246.26 00:30:58.520 lat (usec): min=13015, max=43494, avg=24150.26, stdev=1246.66 00:30:58.520 clat percentiles (usec): 00:30:58.520 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:30:58.520 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:30:58.520 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25035], 95.00th=[25560], 00:30:58.520 | 99.00th=[26084], 99.50th=[30278], 99.90th=[43254], 99.95th=[43254], 00:30:58.520 | 99.99th=[43254] 00:30:58.520 bw ( KiB/s): min= 2560, max= 2688, per=4.16%, avg=2620.63, stdev=65.66, samples=19 00:30:58.520 iops : min= 640, max= 672, avg=655.16, stdev=16.42, samples=19 00:30:58.520 lat (msec) : 20=0.27%, 50=99.73% 00:30:58.520 cpu : usr=96.64%, sys=2.31%, ctx=141, majf=0, minf=36 00:30:58.520 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:58.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.520 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.520 issued rwts: total=6560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.520 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:58.520 filename2: (groupid=0, jobs=1): err= 0: pid=2406149: Mon Jun 10 12:19:46 2024 00:30:58.520 read: IOPS=655, BW=2621KiB/s (2684kB/s)(25.6MiB/10012msec) 00:30:58.520 slat (nsec): min=7114, max=82795, avg=31531.65, stdev=16979.03 00:30:58.520 clat (usec): min=17032, max=47451, avg=24147.31, stdev=1377.40 00:30:58.520 lat (usec): min=17052, max=47471, avg=24178.84, stdev=1377.49 00:30:58.520 clat percentiles (usec): 00:30:58.520 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:30:58.520 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 
60.00th=[23987], 00:30:58.520 | 70.00th=[24249], 80.00th=[24249], 90.00th=[25035], 95.00th=[25560], 00:30:58.520 | 99.00th=[26084], 99.50th=[29754], 99.90th=[47449], 99.95th=[47449], 00:30:58.520 | 99.99th=[47449] 00:30:58.520 bw ( KiB/s): min= 2432, max= 2688, per=4.16%, avg=2620.63, stdev=78.31, samples=19 00:30:58.520 iops : min= 608, max= 672, avg=655.16, stdev=19.58, samples=19 00:30:58.520 lat (msec) : 20=0.24%, 50=99.76% 00:30:58.520 cpu : usr=97.92%, sys=1.70%, ctx=14, majf=0, minf=33 00:30:58.520 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:58.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.520 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.520 issued rwts: total=6560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.520 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:58.520 filename2: (groupid=0, jobs=1): err= 0: pid=2406150: Mon Jun 10 12:19:46 2024 00:30:58.520 read: IOPS=655, BW=2623KiB/s (2686kB/s)(25.6MiB/10004msec) 00:30:58.520 slat (nsec): min=4346, max=91971, avg=44760.49, stdev=17250.05 00:30:58.520 clat (usec): min=10182, max=52204, avg=24004.29, stdev=1567.50 00:30:58.520 lat (usec): min=10197, max=52219, avg=24049.05, stdev=1567.13 00:30:58.520 clat percentiles (usec): 00:30:58.520 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:30:58.520 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:30:58.520 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[25297], 00:30:58.520 | 99.00th=[26084], 99.50th=[29230], 99.90th=[47973], 99.95th=[47973], 00:30:58.520 | 99.99th=[52167] 00:30:58.520 bw ( KiB/s): min= 2432, max= 2688, per=4.15%, avg=2613.26, stdev=88.58, samples=19 00:30:58.520 iops : min= 608, max= 672, avg=653.26, stdev=22.14, samples=19 00:30:58.520 lat (msec) : 20=0.27%, 50=99.70%, 100=0.03% 00:30:58.520 cpu : usr=97.61%, sys=1.81%, ctx=129, majf=0, minf=30 00:30:58.520 
IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:58.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.520 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.520 issued rwts: total=6560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.520 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:58.520 filename2: (groupid=0, jobs=1): err= 0: pid=2406151: Mon Jun 10 12:19:46 2024 00:30:58.521 read: IOPS=655, BW=2623KiB/s (2686kB/s)(25.6MiB/10002msec) 00:30:58.521 slat (nsec): min=6432, max=91858, avg=42870.29, stdev=18826.30 00:30:58.521 clat (usec): min=10610, max=46040, avg=23971.92, stdev=1448.84 00:30:58.521 lat (usec): min=10631, max=46056, avg=24014.79, stdev=1449.52 00:30:58.521 clat percentiles (usec): 00:30:58.521 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:30:58.521 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:30:58.521 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[25297], 00:30:58.521 | 99.00th=[26084], 99.50th=[29230], 99.90th=[45876], 99.95th=[45876], 00:30:58.521 | 99.99th=[45876] 00:30:58.521 bw ( KiB/s): min= 2432, max= 2688, per=4.16%, avg=2619.89, stdev=88.46, samples=19 00:30:58.521 iops : min= 608, max= 672, avg=654.89, stdev=22.10, samples=19 00:30:58.521 lat (msec) : 20=0.24%, 50=99.76% 00:30:58.521 cpu : usr=96.97%, sys=2.15%, ctx=64, majf=0, minf=40 00:30:58.521 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:58.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.521 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.521 issued rwts: total=6560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.521 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:58.521 filename2: (groupid=0, jobs=1): err= 0: pid=2406152: Mon Jun 10 12:19:46 2024 00:30:58.521 read: IOPS=655, 
BW=2624KiB/s (2687kB/s)(25.6MiB/10001msec) 00:30:58.521 slat (usec): min=6, max=127, avg=39.41, stdev=12.97 00:30:58.521 clat (usec): min=9897, max=46006, avg=24050.83, stdev=1459.77 00:30:58.521 lat (usec): min=9909, max=46022, avg=24090.25, stdev=1459.17 00:30:58.521 clat percentiles (usec): 00:30:58.521 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:30:58.521 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:30:58.521 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25035], 95.00th=[25560], 00:30:58.521 | 99.00th=[26084], 99.50th=[29492], 99.90th=[45876], 99.95th=[45876], 00:30:58.521 | 99.99th=[45876] 00:30:58.521 bw ( KiB/s): min= 2432, max= 2688, per=4.16%, avg=2619.89, stdev=88.46, samples=19 00:30:58.521 iops : min= 608, max= 672, avg=654.89, stdev=22.10, samples=19 00:30:58.521 lat (msec) : 10=0.06%, 20=0.18%, 50=99.76% 00:30:58.521 cpu : usr=97.80%, sys=1.79%, ctx=14, majf=0, minf=33 00:30:58.521 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:58.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.521 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.521 issued rwts: total=6560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.521 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:58.521 filename2: (groupid=0, jobs=1): err= 0: pid=2406153: Mon Jun 10 12:19:46 2024 00:30:58.521 read: IOPS=658, BW=2635KiB/s (2698kB/s)(25.8MiB/10006msec) 00:30:58.521 slat (nsec): min=6352, max=90501, avg=27717.79, stdev=16975.65 00:30:58.521 clat (usec): min=5838, max=29812, avg=24070.83, stdev=1403.08 00:30:58.521 lat (usec): min=5856, max=29826, avg=24098.55, stdev=1402.37 00:30:58.521 clat percentiles (usec): 00:30:58.521 | 1.00th=[22676], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:30:58.521 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:30:58.521 | 70.00th=[24249], 80.00th=[24249], 
90.00th=[25035], 95.00th=[25560], 00:30:58.521 | 99.00th=[26346], 99.50th=[28181], 99.90th=[29754], 99.95th=[29754], 00:30:58.521 | 99.99th=[29754] 00:30:58.521 bw ( KiB/s): min= 2432, max= 2816, per=4.18%, avg=2634.11, stdev=88.64, samples=19 00:30:58.521 iops : min= 608, max= 704, avg=658.53, stdev=22.16, samples=19 00:30:58.521 lat (msec) : 10=0.49%, 20=0.24%, 50=99.27% 00:30:58.521 cpu : usr=97.37%, sys=2.00%, ctx=39, majf=0, minf=51 00:30:58.521 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:58.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.521 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.521 issued rwts: total=6592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.521 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:58.521 filename2: (groupid=0, jobs=1): err= 0: pid=2406154: Mon Jun 10 12:19:46 2024 00:30:58.521 read: IOPS=656, BW=2628KiB/s (2691kB/s)(25.7MiB/10010msec) 00:30:58.521 slat (nsec): min=6277, max=66119, avg=26108.78, stdev=11948.48 00:30:58.521 clat (usec): min=10097, max=34805, avg=24145.49, stdev=1077.38 00:30:58.521 lat (usec): min=10115, max=34830, avg=24171.60, stdev=1076.54 00:30:58.521 clat percentiles (usec): 00:30:58.521 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:30:58.521 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:30:58.521 | 70.00th=[24249], 80.00th=[24249], 90.00th=[25035], 95.00th=[25560], 00:30:58.521 | 99.00th=[26608], 99.50th=[28181], 99.90th=[33424], 99.95th=[33817], 00:30:58.521 | 99.99th=[34866] 00:30:58.521 bw ( KiB/s): min= 2560, max= 2688, per=4.17%, avg=2627.37, stdev=65.66, samples=19 00:30:58.521 iops : min= 640, max= 672, avg=656.84, stdev=16.42, samples=19 00:30:58.521 lat (msec) : 20=0.29%, 50=99.71% 00:30:58.521 cpu : usr=97.95%, sys=1.66%, ctx=20, majf=0, minf=45 00:30:58.521 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, 
>=64=0.0% 00:30:58.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.521 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.521 issued rwts: total=6576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.521 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:58.521 filename2: (groupid=0, jobs=1): err= 0: pid=2406155: Mon Jun 10 12:19:46 2024 00:30:58.521 read: IOPS=655, BW=2623KiB/s (2686kB/s)(25.6MiB/10002msec) 00:30:58.521 slat (nsec): min=8436, max=94383, avg=41580.36, stdev=17903.22 00:30:58.521 clat (usec): min=10622, max=46024, avg=23997.86, stdev=1446.06 00:30:58.521 lat (usec): min=10640, max=46039, avg=24039.44, stdev=1446.30 00:30:58.521 clat percentiles (usec): 00:30:58.521 | 1.00th=[22676], 5.00th=[23462], 10.00th=[23462], 20.00th=[23462], 00:30:58.521 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:30:58.521 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25035], 95.00th=[25297], 00:30:58.521 | 99.00th=[26084], 99.50th=[29230], 99.90th=[45876], 99.95th=[45876], 00:30:58.521 | 99.99th=[45876] 00:30:58.521 bw ( KiB/s): min= 2432, max= 2688, per=4.16%, avg=2619.89, stdev=88.46, samples=19 00:30:58.521 iops : min= 608, max= 672, avg=654.89, stdev=22.10, samples=19 00:30:58.521 lat (msec) : 20=0.24%, 50=99.76% 00:30:58.521 cpu : usr=97.23%, sys=2.13%, ctx=94, majf=0, minf=29 00:30:58.521 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:58.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.521 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.521 issued rwts: total=6560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.521 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:58.521 00:30:58.521 Run status group 0 (all jobs): 00:30:58.521 READ: bw=61.5MiB/s (64.5MB/s), 2621KiB/s-2646KiB/s (2684kB/s-2710kB/s), io=616MiB (646MB), run=10001-10015msec 00:30:58.521 
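The statistics above come from fio's SPDK bdev ioengine (the `fio_bdev --ioengine=spdk_bdev --spdk_json_conf ...` invocation appears further down in this log). The actual job file is generated on the fly by `gen_fio_conf` in `dif.sh` and is not printed here, so the following is only an illustrative reconstruction from parameters visible in the output — queue depth 16 (`latency : ... depth=16`), ~10 s runtimes (`run=10001-10015msec`), and 4 KiB random reads (≈655 IOPS at ≈2621 KiB/s) spread across job groups named filename0–filename2. Bdev names and job counts are assumptions.

```ini
; Illustrative sketch only -- NOT the job file dif.sh actually generated.
[global]
ioengine=spdk_bdev   ; SPDK fio plugin, as in the fio_bdev invocation below
thread=1
direct=1
rw=randread          ; read-only stats above, random access pattern
bs=4k                ; ~655 IOPS * 4 KiB ~= 2621 KiB/s, matching the bw lines
iodepth=16           ; matches "latency : ... depth=16"
time_based=1
runtime=10           ; matches run=10001-10015msec

[filename0]          ; one section per filename group in the report
filename=Nvme0n1     ; bdev name is a placeholder; the log does not show it
numjobs=4            ; several pids report under each group above
```

Each `filenameN` section produces one reporting group, and with `numjobs` set, fio prints one per-pid stanza per job, which is why the same group name repeats with consecutive pids in the output above.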
12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:30:58.521 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:58.521 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:58.521 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:58.521 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:58.521 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:58.521 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:58.521 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:58.521 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:58.521 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:58.521 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:58.521 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:58.521 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:58.521 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:58.521 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:58.521 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:58.521 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:58.521 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:58.521 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:58.521 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- 
# [[ 0 == 0 ]] 00:30:58.521 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:58.521 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:58.521 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:58.521 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:58.521 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 
00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:58.522 bdev_null0 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:58.522 [2024-06-10 12:19:46.612091] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:58.522 bdev_null1 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local 
file 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:58.522 { 00:30:58.522 "params": { 00:30:58.522 "name": "Nvme$subsystem", 00:30:58.522 "trtype": "$TEST_TRANSPORT", 00:30:58.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:58.522 "adrfam": "ipv4", 00:30:58.522 "trsvcid": "$NVMF_PORT", 00:30:58.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:58.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:58.522 "hdgst": ${hdgst:-false}, 00:30:58.522 "ddgst": ${ddgst:-false} 00:30:58.522 }, 00:30:58.522 "method": "bdev_nvme_attach_controller" 00:30:58.522 } 00:30:58.522 EOF 00:30:58.522 )") 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:58.522 
12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:58.522 12:19:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:58.522 { 00:30:58.522 "params": { 00:30:58.522 "name": "Nvme$subsystem", 00:30:58.522 "trtype": "$TEST_TRANSPORT", 00:30:58.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:58.522 "adrfam": "ipv4", 00:30:58.522 "trsvcid": "$NVMF_PORT", 00:30:58.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:58.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:58.522 "hdgst": ${hdgst:-false}, 00:30:58.523 "ddgst": ${ddgst:-false} 00:30:58.523 }, 00:30:58.523 "method": "bdev_nvme_attach_controller" 00:30:58.523 } 00:30:58.523 EOF 00:30:58.523 )") 00:30:58.523 12:19:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:58.523 12:19:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
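The `config+=(...)`/`jq`/`IFS=,` steps above assemble one `bdev_nvme_attach_controller` entry per subsystem into the JSON fed to fio over `/dev/fd/62`. A rough Python sketch of that assembly (an illustration, not SPDK's actual `gen_nvmf_target_json` bash function; field values are the ones visible in the log):

```python
import json

def gen_target_json(subsystems, traddr="10.0.0.2", trsvcid="4420",
                    hdgst=False, ddgst=False):
    """Build one bdev_nvme_attach_controller entry per subsystem id,
    mirroring the per-subsystem heredoc fragments in nvmf/common.sh."""
    entries = []
    for sub in subsystems:
        entries.append({
            "params": {
                "name": f"Nvme{sub}",
                "trtype": "tcp",
                "traddr": traddr,
                "adrfam": "ipv4",
                "trsvcid": trsvcid,
                "subnqn": f"nqn.2016-06.io.spdk:cnode{sub}",
                "hostnqn": f"nqn.2016-06.io.spdk:host{sub}",
                "hdgst": hdgst,
                "ddgst": ddgst,
            },
            "method": "bdev_nvme_attach_controller",
        })
    return entries
```

The real script joins the pretty-printed fragments with `IFS=,` before handing them to the fio bdev plugin; the dictionary layout above matches the printed payload for `Nvme0`/`Nvme1`.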
00:30:58.523 12:19:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:58.523 12:19:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:58.523 "params": { 00:30:58.523 "name": "Nvme0", 00:30:58.523 "trtype": "tcp", 00:30:58.523 "traddr": "10.0.0.2", 00:30:58.523 "adrfam": "ipv4", 00:30:58.523 "trsvcid": "4420", 00:30:58.523 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:58.523 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:58.523 "hdgst": false, 00:30:58.523 "ddgst": false 00:30:58.523 }, 00:30:58.523 "method": "bdev_nvme_attach_controller" 00:30:58.523 },{ 00:30:58.523 "params": { 00:30:58.523 "name": "Nvme1", 00:30:58.523 "trtype": "tcp", 00:30:58.523 "traddr": "10.0.0.2", 00:30:58.523 "adrfam": "ipv4", 00:30:58.523 "trsvcid": "4420", 00:30:58.523 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:58.523 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:58.523 "hdgst": false, 00:30:58.523 "ddgst": false 00:30:58.523 }, 00:30:58.523 "method": "bdev_nvme_attach_controller" 00:30:58.523 }' 00:30:58.523 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:58.523 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:58.523 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:58.523 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:58.523 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:30:58.523 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:58.523 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:58.523 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:58.523 12:19:46 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:58.523 12:19:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:58.523 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:58.523 ... 00:30:58.523 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:58.523 ... 00:30:58.523 fio-3.35 00:30:58.523 Starting 4 threads 00:30:58.523 EAL: No free 2048 kB hugepages reported on node 1 00:31:03.842 00:31:03.842 filename0: (groupid=0, jobs=1): err= 0: pid=2408146: Mon Jun 10 12:19:52 2024 00:31:03.842 read: IOPS=2831, BW=22.1MiB/s (23.2MB/s)(111MiB/5004msec) 00:31:03.842 slat (nsec): min=3905, max=48689, avg=8479.12, stdev=2750.00 00:31:03.842 clat (usec): min=1194, max=50004, avg=2801.02, stdev=1251.43 00:31:03.842 lat (usec): min=1205, max=50016, avg=2809.50, stdev=1251.31 00:31:03.842 clat percentiles (usec): 00:31:03.842 | 1.00th=[ 1778], 5.00th=[ 2024], 10.00th=[ 2180], 20.00th=[ 2376], 00:31:03.842 | 30.00th=[ 2507], 40.00th=[ 2606], 50.00th=[ 2704], 60.00th=[ 2802], 00:31:03.842 | 70.00th=[ 2900], 80.00th=[ 3032], 90.00th=[ 3589], 95.00th=[ 3982], 00:31:03.842 | 99.00th=[ 4424], 99.50th=[ 4621], 99.90th=[ 5145], 99.95th=[50070], 00:31:03.842 | 99.99th=[50070] 00:31:03.842 bw ( KiB/s): min=20304, max=24048, per=25.78%, avg=22670.22, stdev=1094.17, samples=9 00:31:03.842 iops : min= 2538, max= 3006, avg=2833.78, stdev=136.77, samples=9 00:31:03.842 lat (msec) : 2=4.52%, 4=90.65%, 10=4.77%, 50=0.05%, 100=0.01% 00:31:03.842 cpu : usr=93.32%, sys=6.34%, ctx=10, majf=0, minf=9 00:31:03.842 IO depths : 1=0.2%, 2=2.9%, 4=68.3%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:03.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.842 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.842 issued rwts: total=14168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.842 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:03.842 filename0: (groupid=0, jobs=1): err= 0: pid=2408147: Mon Jun 10 12:19:52 2024 00:31:03.842 read: IOPS=2774, BW=21.7MiB/s (22.7MB/s)(108MiB/5002msec) 00:31:03.842 slat (nsec): min=5912, max=37369, avg=8527.59, stdev=2893.16 00:31:03.842 clat (usec): min=959, max=6350, avg=2858.93, stdev=586.46 00:31:03.842 lat (usec): min=969, max=6362, avg=2867.45, stdev=586.20 00:31:03.842 clat percentiles (usec): 00:31:03.842 | 1.00th=[ 1762], 5.00th=[ 2114], 10.00th=[ 2245], 20.00th=[ 2442], 00:31:03.842 | 30.00th=[ 2573], 40.00th=[ 2671], 50.00th=[ 2769], 60.00th=[ 2835], 00:31:03.842 | 70.00th=[ 2933], 80.00th=[ 3130], 90.00th=[ 3785], 95.00th=[ 4146], 00:31:03.842 | 99.00th=[ 4621], 99.50th=[ 4817], 99.90th=[ 5211], 99.95th=[ 5342], 00:31:03.842 | 99.99th=[ 6325] 00:31:03.842 bw ( KiB/s): min=20768, max=23664, per=25.25%, avg=22202.67, stdev=796.67, samples=9 00:31:03.842 iops : min= 2596, max= 2958, avg=2775.33, stdev=99.58, samples=9 00:31:03.842 lat (usec) : 1000=0.03% 00:31:03.842 lat (msec) : 2=2.91%, 4=90.50%, 10=6.56% 00:31:03.842 cpu : usr=92.56%, sys=7.12%, ctx=8, majf=0, minf=9 00:31:03.842 IO depths : 1=0.2%, 2=2.2%, 4=69.9%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:03.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.842 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.842 issued rwts: total=13879,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.842 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:03.842 filename1: (groupid=0, jobs=1): err= 0: pid=2408148: Mon Jun 10 12:19:52 2024 00:31:03.842 read: IOPS=2689, BW=21.0MiB/s (22.0MB/s)(105MiB/5001msec) 00:31:03.842 slat (nsec): min=2788, max=33128, avg=8655.60, 
stdev=2973.08 00:31:03.842 clat (usec): min=870, max=6699, avg=2950.22, stdev=562.28 00:31:03.842 lat (usec): min=876, max=6706, avg=2958.87, stdev=562.12 00:31:03.842 clat percentiles (usec): 00:31:03.842 | 1.00th=[ 1893], 5.00th=[ 2212], 10.00th=[ 2376], 20.00th=[ 2540], 00:31:03.842 | 30.00th=[ 2671], 40.00th=[ 2769], 50.00th=[ 2868], 60.00th=[ 2933], 00:31:03.842 | 70.00th=[ 3097], 80.00th=[ 3326], 90.00th=[ 3752], 95.00th=[ 4080], 00:31:03.842 | 99.00th=[ 4555], 99.50th=[ 4817], 99.90th=[ 5211], 99.95th=[ 5735], 00:31:03.842 | 99.99th=[ 6718] 00:31:03.842 bw ( KiB/s): min=20880, max=22640, per=24.54%, avg=21580.44, stdev=580.30, samples=9 00:31:03.842 iops : min= 2610, max= 2830, avg=2697.56, stdev=72.54, samples=9 00:31:03.842 lat (usec) : 1000=0.20% 00:31:03.842 lat (msec) : 2=1.64%, 4=92.26%, 10=5.90% 00:31:03.842 cpu : usr=92.78%, sys=6.90%, ctx=12, majf=0, minf=9 00:31:03.842 IO depths : 1=0.3%, 2=3.0%, 4=68.0%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:03.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.842 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.842 issued rwts: total=13449,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.842 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:03.842 filename1: (groupid=0, jobs=1): err= 0: pid=2408149: Mon Jun 10 12:19:52 2024 00:31:03.842 read: IOPS=2701, BW=21.1MiB/s (22.1MB/s)(106MiB/5001msec) 00:31:03.842 slat (nsec): min=5930, max=34464, avg=8710.32, stdev=3073.93 00:31:03.842 clat (usec): min=992, max=5662, avg=2936.43, stdev=576.00 00:31:03.842 lat (usec): min=1004, max=5674, avg=2945.14, stdev=575.63 00:31:03.842 clat percentiles (usec): 00:31:03.842 | 1.00th=[ 1876], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2540], 00:31:03.842 | 30.00th=[ 2638], 40.00th=[ 2737], 50.00th=[ 2802], 60.00th=[ 2900], 00:31:03.842 | 70.00th=[ 2999], 80.00th=[ 3228], 90.00th=[ 3884], 95.00th=[ 4178], 00:31:03.842 | 99.00th=[ 4686], 99.50th=[ 
4883], 99.90th=[ 5276], 99.95th=[ 5473], 00:31:03.842 | 99.99th=[ 5669] 00:31:03.842 bw ( KiB/s): min=20864, max=22384, per=24.51%, avg=21552.00, stdev=533.73, samples=9 00:31:03.842 iops : min= 2608, max= 2798, avg=2694.00, stdev=66.72, samples=9 00:31:03.842 lat (usec) : 1000=0.01% 00:31:03.842 lat (msec) : 2=1.69%, 4=90.72%, 10=7.59% 00:31:03.842 cpu : usr=93.46%, sys=6.22%, ctx=10, majf=0, minf=9 00:31:03.842 IO depths : 1=0.4%, 2=2.2%, 4=69.7%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:03.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.842 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.842 issued rwts: total=13511,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.842 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:03.842 00:31:03.842 Run status group 0 (all jobs): 00:31:03.842 READ: bw=85.9MiB/s (90.1MB/s), 21.0MiB/s-22.1MiB/s (22.0MB/s-23.2MB/s), io=430MiB (451MB), run=5001-5004msec 00:31:03.842 12:19:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:03.842 12:19:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:03.842 12:19:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:03.842 12:19:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:03.842 12:19:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:03.843 12:19:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:03.843 12:19:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:03.843 12:19:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.843 12:19:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:03.843 12:19:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 
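As a quick sanity check on the run-status line above, the four per-file bandwidths fio reports (22.1, 21.7, 21.0 and 21.1 MiB/s) should sum to the group aggregate (85.9 MiB/s):

```python
# Per-file read bandwidth from the four fio_dif_rand_params jobs, in MiB/s.
per_job_mib_s = [22.1, 21.7, 21.0, 21.1]

# fio's "Run status group 0" READ aggregate is simply the sum of the jobs.
aggregate_mib_s = sum(per_job_mib_s)
```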
00:31:03.843 12:19:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:03.843 12:19:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.843 12:19:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:03.843 12:19:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:03.843 12:19:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:03.843 12:19:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:03.843 12:19:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:03.843 12:19:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:03.843 12:19:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.843 12:19:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:03.843 12:19:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:03.843 12:19:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:03.843 12:19:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.843 12:19:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:03.843 00:31:03.843 real 0m24.163s 00:31:03.843 user 4m53.816s 00:31:03.843 sys 0m8.455s 00:31:03.843 12:19:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:03.843 12:19:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.843 ************************************ 00:31:03.843 END TEST fio_dif_rand_params 00:31:03.843 ************************************ 00:31:03.843 12:19:52 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:03.843 12:19:52 nvmf_dif -- 
common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:31:03.843 12:19:52 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:03.843 12:19:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:03.843 ************************************ 00:31:03.843 START TEST fio_dif_digest 00:31:03.843 ************************************ 00:31:03.843 12:19:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # fio_dif_digest 00:31:03.843 12:19:52 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:03.843 12:19:52 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:03.843 12:19:52 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:03.843 12:19:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:03.843 12:19:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:03.843 12:19:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:03.843 12:19:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:03.843 12:19:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:03.843 12:19:52 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:03.843 12:19:52 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:31:03.843 12:19:52 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:31:03.843 12:19:52 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:03.843 12:19:52 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:03.843 12:19:52 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:03.843 12:19:52 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:03.843 12:19:52 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:03.843 12:19:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 
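The parameters set above (`bs=128k`, `numjobs=3`, `iodepth=3`, `runtime=10`) correspond roughly to a fio job file like the following sketch. This is not the literal `gen_fio_conf` output; the `filename` is an assumption (the attached controller's first namespace), and the digest settings (`hdgst`/`ddgst`) travel in the JSON config rather than the job file:

```ini
[global]
ioengine=spdk_bdev            ; passed on the fio command line in the log
thread=1
runtime=10
time_based=1

[filename0]
filename=Nvme0n1              ; assumed bdev name exposed by the JSON config
rw=randread
bs=128k
iodepth=3
numjobs=3
```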
00:31:03.843 12:19:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:03.843 bdev_null0 00:31:03.843 12:19:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:03.843 12:19:52 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:03.843 12:19:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:03.843 12:19:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:03.843 12:19:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:03.843 12:19:52 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:03.843 12:19:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:03.843 12:19:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:03.843 12:19:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:03.843 12:19:53 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:03.843 12:19:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:03.843 12:19:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:03.843 [2024-06-10 12:19:53.007200] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:03.843 12:19:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:03.843 12:19:53 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:03.843 12:19:53 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:03.843 12:19:53 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:03.843 12:19:53 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@532 -- # config=() 00:31:03.843 12:19:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:31:03.843 12:19:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:03.843 12:19:53 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:03.843 12:19:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:03.843 { 00:31:03.843 "params": { 00:31:03.843 "name": "Nvme$subsystem", 00:31:03.843 "trtype": "$TEST_TRANSPORT", 00:31:03.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:03.843 "adrfam": "ipv4", 00:31:03.843 "trsvcid": "$NVMF_PORT", 00:31:03.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:03.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:03.843 "hdgst": ${hdgst:-false}, 00:31:03.843 "ddgst": ${ddgst:-false} 00:31:03.843 }, 00:31:03.843 "method": "bdev_nvme_attach_controller" 00:31:03.843 } 00:31:03.843 EOF 00:31:03.843 )") 00:31:03.843 12:19:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:03.843 12:19:53 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:03.843 12:19:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:31:03.843 12:19:53 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:03.843 12:19:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:03.843 12:19:53 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:03.843 12:19:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # local sanitizers 00:31:03.843 12:19:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:03.843 12:19:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # shift 00:31:03.843 12:19:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local asan_lib= 00:31:03.843 12:19:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:03.843 12:19:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:31:03.843 12:19:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:03.843 12:19:53 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:03.843 12:19:53 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:03.843 12:19:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libasan 00:31:03.843 12:19:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:03.843 12:19:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:31:03.843 12:19:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:31:03.844 12:19:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:03.844 "params": { 00:31:03.844 "name": "Nvme0", 00:31:03.844 "trtype": "tcp", 00:31:03.844 "traddr": "10.0.0.2", 00:31:03.844 "adrfam": "ipv4", 00:31:03.844 "trsvcid": "4420", 00:31:03.844 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:03.844 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:03.844 "hdgst": true, 00:31:03.844 "ddgst": true 00:31:03.844 }, 00:31:03.844 "method": "bdev_nvme_attach_controller" 00:31:03.844 }' 00:31:03.844 12:19:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:03.844 12:19:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:03.844 12:19:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:03.844 12:19:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:03.844 12:19:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:31:03.844 12:19:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:03.844 12:19:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:03.844 12:19:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:03.844 12:19:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:03.844 12:19:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:04.101 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:04.101 ... 
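The per-file result blocks that follow can be scraped with a small helper (illustrative only, not part of the SPDK test suite) that pulls IOPS and bandwidth out of fio's `read:` summary lines:

```python
import re

# Matches fio lines like: "read: IOPS=305, BW=38.2MiB/s (40.0MB/s)(383MiB/10045msec)"
FIO_READ_RE = re.compile(r"read:\s*IOPS=(\d+),\s*BW=([\d.]+)MiB/s")

def parse_read_summary(line):
    """Return (iops, bandwidth_mib_s) from a fio read-summary line, or None."""
    m = FIO_READ_RE.search(line)
    if not m:
        return None
    return int(m.group(1)), float(m.group(2))
```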
00:31:04.101 fio-3.35 00:31:04.101 Starting 3 threads 00:31:04.101 EAL: No free 2048 kB hugepages reported on node 1 00:31:16.298 00:31:16.298 filename0: (groupid=0, jobs=1): err= 0: pid=2409362: Mon Jun 10 12:20:03 2024 00:31:16.298 read: IOPS=305, BW=38.2MiB/s (40.0MB/s)(383MiB/10045msec) 00:31:16.298 slat (nsec): min=6211, max=25465, avg=11055.19, stdev=2248.73 00:31:16.298 clat (usec): min=7321, max=45282, avg=9788.10, stdev=938.28 00:31:16.298 lat (usec): min=7328, max=45295, avg=9799.15, stdev=938.28 00:31:16.298 clat percentiles (usec): 00:31:16.298 | 1.00th=[ 8160], 5.00th=[ 8717], 10.00th=[ 8848], 20.00th=[ 9241], 00:31:16.298 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10028], 00:31:16.298 | 70.00th=[10159], 80.00th=[10290], 90.00th=[10552], 95.00th=[10945], 00:31:16.298 | 99.00th=[11469], 99.50th=[11600], 99.90th=[13173], 99.95th=[13304], 00:31:16.298 | 99.99th=[45351] 00:31:16.298 bw ( KiB/s): min=38400, max=40192, per=35.62%, avg=39232.00, stdev=476.22, samples=20 00:31:16.298 iops : min= 300, max= 314, avg=306.50, stdev= 3.72, samples=20 00:31:16.298 lat (msec) : 10=61.90%, 20=38.06%, 50=0.03% 00:31:16.298 cpu : usr=90.41%, sys=9.17%, ctx=21, majf=0, minf=83 00:31:16.298 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:16.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.298 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.298 issued rwts: total=3066,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.298 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:16.298 filename0: (groupid=0, jobs=1): err= 0: pid=2409363: Mon Jun 10 12:20:03 2024 00:31:16.298 read: IOPS=280, BW=35.1MiB/s (36.8MB/s)(352MiB/10043msec) 00:31:16.298 slat (nsec): min=6213, max=64779, avg=10932.96, stdev=2545.34 00:31:16.298 clat (usec): min=8082, max=49166, avg=10652.39, stdev=1048.57 00:31:16.298 lat (usec): min=8094, max=49176, avg=10663.32, stdev=1048.53 
00:31:16.298 clat percentiles (usec): 00:31:16.298 | 1.00th=[ 9110], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10028], 00:31:16.298 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814], 00:31:16.298 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11994], 00:31:16.298 | 99.00th=[12518], 99.50th=[12911], 99.90th=[13960], 99.95th=[14484], 00:31:16.298 | 99.99th=[49021] 00:31:16.298 bw ( KiB/s): min=34816, max=36608, per=32.72%, avg=36044.80, stdev=495.57, samples=20 00:31:16.298 iops : min= 272, max= 286, avg=281.60, stdev= 3.87, samples=20 00:31:16.298 lat (msec) : 10=19.28%, 20=80.69%, 50=0.04% 00:31:16.298 cpu : usr=91.40%, sys=8.22%, ctx=29, majf=0, minf=149 00:31:16.298 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:16.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.298 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.298 issued rwts: total=2817,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.298 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:16.298 filename0: (groupid=0, jobs=1): err= 0: pid=2409364: Mon Jun 10 12:20:03 2024 00:31:16.298 read: IOPS=274, BW=34.4MiB/s (36.0MB/s)(345MiB/10044msec) 00:31:16.298 slat (nsec): min=6258, max=88267, avg=11221.99, stdev=2480.94 00:31:16.298 clat (usec): min=7011, max=44919, avg=10885.16, stdev=1208.39 00:31:16.298 lat (usec): min=7023, max=44931, avg=10896.38, stdev=1208.40 00:31:16.298 clat percentiles (usec): 00:31:16.298 | 1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10159], 00:31:16.298 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[11076], 00:31:16.298 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12256], 00:31:16.298 | 99.00th=[13042], 99.50th=[13435], 99.90th=[14353], 99.95th=[44827], 00:31:16.298 | 99.99th=[44827] 00:31:16.298 bw ( KiB/s): min=34304, max=36352, per=32.06%, avg=35318.70, stdev=522.17, samples=20 00:31:16.298 
iops : min= 268, max= 284, avg=275.90, stdev= 4.08, samples=20 00:31:16.298 lat (msec) : 10=11.88%, 20=88.05%, 50=0.07% 00:31:16.298 cpu : usr=91.54%, sys=8.16%, ctx=28, majf=0, minf=121 00:31:16.298 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:16.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.298 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.298 issued rwts: total=2761,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.299 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:16.299 00:31:16.299 Run status group 0 (all jobs): 00:31:16.299 READ: bw=108MiB/s (113MB/s), 34.4MiB/s-38.2MiB/s (36.0MB/s-40.0MB/s), io=1081MiB (1133MB), run=10043-10045msec 00:31:16.299 12:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:16.299 12:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:16.299 12:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:16.299 12:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:16.299 12:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:16.299 12:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:16.299 12:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:16.299 12:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:16.299 12:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:16.299 12:20:04 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:16.299 12:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:16.299 12:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:16.299 12:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- 
# [[ 0 == 0 ]] 00:31:16.299 00:31:16.299 real 0m11.110s 00:31:16.299 user 0m35.466s 00:31:16.299 sys 0m2.923s 00:31:16.299 12:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:16.299 12:20:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:16.299 ************************************ 00:31:16.299 END TEST fio_dif_digest 00:31:16.299 ************************************ 00:31:16.299 12:20:04 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:16.299 12:20:04 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:16.299 12:20:04 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:16.299 12:20:04 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:31:16.299 12:20:04 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:16.299 12:20:04 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:31:16.299 12:20:04 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:16.299 12:20:04 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:16.299 rmmod nvme_tcp 00:31:16.299 rmmod nvme_fabrics 00:31:16.299 rmmod nvme_keyring 00:31:16.299 12:20:04 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:16.299 12:20:04 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:31:16.299 12:20:04 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:31:16.299 12:20:04 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2400458 ']' 00:31:16.299 12:20:04 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2400458 00:31:16.299 12:20:04 nvmf_dif -- common/autotest_common.sh@949 -- # '[' -z 2400458 ']' 00:31:16.299 12:20:04 nvmf_dif -- common/autotest_common.sh@953 -- # kill -0 2400458 00:31:16.299 12:20:04 nvmf_dif -- common/autotest_common.sh@954 -- # uname 00:31:16.299 12:20:04 nvmf_dif -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:16.299 12:20:04 nvmf_dif -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2400458 00:31:16.299 12:20:04 nvmf_dif -- common/autotest_common.sh@955 -- # 
process_name=reactor_0 00:31:16.299 12:20:04 nvmf_dif -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:16.299 12:20:04 nvmf_dif -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2400458' 00:31:16.299 killing process with pid 2400458 00:31:16.299 12:20:04 nvmf_dif -- common/autotest_common.sh@968 -- # kill 2400458 00:31:16.299 12:20:04 nvmf_dif -- common/autotest_common.sh@973 -- # wait 2400458 00:31:16.299 12:20:04 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:16.299 12:20:04 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:18.198 Waiting for block devices as requested 00:31:18.198 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:18.198 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:18.198 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:18.456 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:18.456 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:18.456 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:18.714 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:18.714 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:18.714 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:18.714 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:18.973 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:18.973 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:18.973 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:19.231 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:19.231 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:19.231 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:19.490 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:31:19.490 12:20:08 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:19.490 12:20:08 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:19.490 12:20:08 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:19.490 12:20:08 nvmf_dif -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:31:19.490 12:20:08 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:19.490 12:20:08 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:19.490 12:20:08 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.022 12:20:11 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:22.022 00:31:22.022 real 1m15.718s 00:31:22.022 user 7m12.910s 00:31:22.022 sys 0m28.820s 00:31:22.022 12:20:11 nvmf_dif -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:22.022 12:20:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:22.022 ************************************ 00:31:22.022 END TEST nvmf_dif 00:31:22.022 ************************************ 00:31:22.022 12:20:11 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:22.022 12:20:11 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:31:22.022 12:20:11 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:22.022 12:20:11 -- common/autotest_common.sh@10 -- # set +x 00:31:22.022 ************************************ 00:31:22.022 START TEST nvmf_abort_qd_sizes 00:31:22.022 ************************************ 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:22.022 * Looking for test storage... 
00:31:22.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:22.022 12:20:11 
nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:31:22.022 12:20:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:28.589 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:28.589 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:31:28.589 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:28.589 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:28.589 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:28.589 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:28.589 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:28.589 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:31:28.589 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:28.589 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:31:28.589 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:31:28.589 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:31:28.589 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:31:28.589 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:31:28.589 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:31:28.589 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:28.589 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:28.589 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@304 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:28.589 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:28.589 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:28.590 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:28.590 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:31:28.590 Found net devices under 0000:af:00.0: cvl_0_0 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:28.590 Found net devices under 0000:af:00.1: cvl_0_1 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:28.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:28.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:31:28.590 00:31:28.590 --- 10.0.0.2 ping statistics --- 00:31:28.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.590 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:28.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:28.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:31:28.590 00:31:28.590 --- 10.0.0.1 ping statistics --- 00:31:28.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.590 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:28.590 12:20:17 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:31.874 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:31.874 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:31.874 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:31.874 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:31.874 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:31.874 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:31.874 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:31.874 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:31.874 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:31.874 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:31.874 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:31.874 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:31.874 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:31.874 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:31.874 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:31:31.874 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:33.248 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:31:33.248 12:20:22 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:33.248 12:20:22 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:33.248 12:20:22 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:33.248 12:20:22 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:33.248 12:20:22 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:33.248 12:20:22 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:33.248 12:20:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:33.248 12:20:22 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:33.248 12:20:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:33.248 12:20:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:33.248 12:20:22 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2417658 00:31:33.248 12:20:22 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2417658 00:31:33.248 12:20:22 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:33.248 12:20:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@830 -- # '[' -z 2417658 ']' 00:31:33.248 12:20:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:33.248 12:20:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:33.248 12:20:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:33.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:33.248 12:20:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:33.248 12:20:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:33.248 [2024-06-10 12:20:22.658331] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:31:33.248 [2024-06-10 12:20:22.658383] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:33.248 EAL: No free 2048 kB hugepages reported on node 1 00:31:33.248 [2024-06-10 12:20:22.732812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:33.507 [2024-06-10 12:20:22.812335] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:33.507 [2024-06-10 12:20:22.812370] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:33.507 [2024-06-10 12:20:22.812380] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:33.507 [2024-06-10 12:20:22.812389] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:33.507 [2024-06-10 12:20:22.812396] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:33.507 [2024-06-10 12:20:22.812434] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:31:33.507 [2024-06-10 12:20:22.812550] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:31:33.507 [2024-06-10 12:20:22.812573] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:31:33.507 [2024-06-10 12:20:22.812575] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:31:34.075 12:20:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:34.075 12:20:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@863 -- # return 0 00:31:34.075 12:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:34.075 12:20:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:34.075 12:20:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:34.075 12:20:23 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:34.075 12:20:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:34.075 12:20:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:34.075 12:20:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:34.075 12:20:23 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:31:34.075 12:20:23 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:31:34.075 12:20:23 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:d8:00.0 ]] 00:31:34.075 12:20:23 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:34.075 12:20:23 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:34.075 12:20:23 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:d8:00.0 ]] 
00:31:34.075 12:20:23 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:31:34.075 12:20:23 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:34.075 12:20:23 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:34.075 12:20:23 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:31:34.075 12:20:23 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:d8:00.0 00:31:34.075 12:20:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:31:34.075 12:20:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:d8:00.0 00:31:34.075 12:20:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:34.075 12:20:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:31:34.075 12:20:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:34.075 12:20:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:34.075 ************************************ 00:31:34.075 START TEST spdk_target_abort 00:31:34.075 ************************************ 00:31:34.075 12:20:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # spdk_target 00:31:34.075 12:20:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:34.075 12:20:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:d8:00.0 -b spdk_target 00:31:34.075 12:20:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:34.075 12:20:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:37.364 spdk_targetn1 00:31:37.364 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:37.364 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:37.364 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:37.364 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:37.364 [2024-06-10 12:20:26.413238] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:37.364 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:37.364 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:37.364 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:37.364 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:37.364 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:37.364 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:37.364 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:37.364 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:37.364 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:37.364 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:37.364 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:37.364 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:37.364 [2024-06-10 12:20:26.449502] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:37.364 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:37.364 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:37.364 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:37.364 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:37.364 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:37.364 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:37.364 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:37.364 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:37.364 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:37.364 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:37.364 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:37.364 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:37.364 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:37.364 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:37.364 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:37.365 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:37.365 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:37.365 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:37.365 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:37.365 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:37.365 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:37.365 12:20:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:37.365 EAL: No free 2048 kB hugepages reported on node 1 00:31:40.651 Initializing NVMe Controllers 00:31:40.651 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:40.651 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:40.651 Initialization complete. Launching workers. 
00:31:40.651 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16184, failed: 0 00:31:40.651 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1410, failed to submit 14774 00:31:40.651 success 747, unsuccess 663, failed 0 00:31:40.651 12:20:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:40.651 12:20:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:40.651 EAL: No free 2048 kB hugepages reported on node 1 00:31:43.939 Initializing NVMe Controllers 00:31:43.939 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:43.939 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:43.939 Initialization complete. Launching workers. 
00:31:43.939 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8429, failed: 0 00:31:43.939 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1235, failed to submit 7194 00:31:43.939 success 325, unsuccess 910, failed 0 00:31:43.939 12:20:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:43.939 12:20:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:43.939 EAL: No free 2048 kB hugepages reported on node 1 00:31:47.224 Initializing NVMe Controllers 00:31:47.224 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:47.224 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:47.224 Initialization complete. Launching workers. 
00:31:47.224 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 39540, failed: 0 00:31:47.224 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2841, failed to submit 36699 00:31:47.224 success 573, unsuccess 2268, failed 0 00:31:47.224 12:20:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:47.224 12:20:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:47.224 12:20:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:47.224 12:20:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:47.224 12:20:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:47.224 12:20:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:47.224 12:20:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:48.599 12:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:48.599 12:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2417658 00:31:48.599 12:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@949 -- # '[' -z 2417658 ']' 00:31:48.599 12:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # kill -0 2417658 00:31:48.599 12:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # uname 00:31:48.599 12:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:48.599 12:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2417658 00:31:48.599 12:20:38 nvmf_abort_qd_sizes.spdk_target_abort 
-- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:48.599 12:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:48.599 12:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2417658' 00:31:48.599 killing process with pid 2417658 00:31:48.599 12:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # kill 2417658 00:31:48.599 12:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # wait 2417658 00:31:48.858 00:31:48.858 real 0m14.695s 00:31:48.858 user 0m58.107s 00:31:48.858 sys 0m2.844s 00:31:48.858 12:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:48.858 12:20:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:48.858 ************************************ 00:31:48.858 END TEST spdk_target_abort 00:31:48.858 ************************************ 00:31:48.858 12:20:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:48.858 12:20:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:31:48.858 12:20:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:48.858 12:20:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:48.858 ************************************ 00:31:48.858 START TEST kernel_target_abort 00:31:48.858 ************************************ 00:31:48.858 12:20:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # kernel_target 00:31:48.858 12:20:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:48.858 12:20:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:31:48.858 12:20:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- 
# ip_candidates=() 00:31:48.858 12:20:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:48.858 12:20:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.858 12:20:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.858 12:20:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:48.858 12:20:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.858 12:20:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:48.858 12:20:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:48.858 12:20:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:48.858 12:20:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:48.858 12:20:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:48.858 12:20:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:48.858 12:20:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:48.858 12:20:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:48.858 12:20:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:48.858 12:20:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:31:48.858 12:20:38 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:48.858 12:20:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:48.858 12:20:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:48.858 12:20:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:51.439 Waiting for block devices as requested 00:31:51.697 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:51.697 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:51.697 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:51.955 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:51.955 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:51.955 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:52.214 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:52.214 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:52.214 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:52.214 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:52.472 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:52.472 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:52.472 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:52.730 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:52.730 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:52.730 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:52.988 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:31:52.988 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:52.988 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:52.988 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:52.988 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # local 
device=nvme0n1 00:31:52.988 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:52.988 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:31:52.988 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:52.988 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:52.988 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:53.247 No valid GPT data, bailing 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:31:53.247 00:31:53.247 Discovery Log Number of Records 2, Generation counter 2 00:31:53.247 =====Discovery Log Entry 0====== 00:31:53.247 trtype: tcp 00:31:53.247 adrfam: ipv4 00:31:53.247 subtype: current discovery subsystem 00:31:53.247 treq: not specified, sq flow control disable supported 00:31:53.247 portid: 1 00:31:53.247 trsvcid: 4420 00:31:53.247 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:53.247 traddr: 10.0.0.1 00:31:53.247 eflags: none 00:31:53.247 sectype: none 00:31:53.247 =====Discovery Log Entry 1====== 00:31:53.247 trtype: tcp 00:31:53.247 adrfam: ipv4 00:31:53.247 subtype: nvme subsystem 00:31:53.247 treq: not specified, sq flow control disable supported 00:31:53.247 portid: 1 00:31:53.247 trsvcid: 4420 00:31:53.247 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:53.247 traddr: 10.0.0.1 00:31:53.247 eflags: none 00:31:53.247 sectype: none 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:53.247 12:20:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:53.247 EAL: No free 2048 kB hugepages reported on node 1 00:31:56.529 Initializing NVMe Controllers 00:31:56.529 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:56.530 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:56.530 Initialization complete. Launching workers. 
00:31:56.530 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 88362, failed: 0 00:31:56.530 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 88362, failed to submit 0 00:31:56.530 success 0, unsuccess 88362, failed 0 00:31:56.530 12:20:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:56.530 12:20:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:56.530 EAL: No free 2048 kB hugepages reported on node 1 00:31:59.810 Initializing NVMe Controllers 00:31:59.810 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:59.810 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:59.810 Initialization complete. Launching workers. 
00:31:59.810 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 143211, failed: 0 00:31:59.810 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35818, failed to submit 107393 00:31:59.810 success 0, unsuccess 35818, failed 0 00:31:59.810 12:20:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:59.810 12:20:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:59.810 EAL: No free 2048 kB hugepages reported on node 1 00:32:03.092 Initializing NVMe Controllers 00:32:03.092 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:03.092 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:03.092 Initialization complete. Launching workers. 
00:32:03.092 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 137052, failed: 0 00:32:03.092 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34290, failed to submit 102762 00:32:03.092 success 0, unsuccess 34290, failed 0 00:32:03.092 12:20:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:03.092 12:20:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:03.092 12:20:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:32:03.092 12:20:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:03.093 12:20:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:03.093 12:20:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:03.093 12:20:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:03.093 12:20:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:03.093 12:20:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:03.093 12:20:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:05.620 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:05.620 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:05.620 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:05.620 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:05.620 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:05.620 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:05.620 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:05.620 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:05.620 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:05.620 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:05.620 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:05.620 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:05.620 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:05.620 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:05.620 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:05.620 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:06.995 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:32:06.995 00:32:06.995 real 0m18.162s 00:32:06.995 user 0m7.996s 00:32:06.995 sys 0m5.449s 00:32:06.995 12:20:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:06.995 12:20:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:06.995 ************************************ 00:32:06.995 END TEST kernel_target_abort 00:32:06.995 ************************************ 00:32:07.253 12:20:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:07.253 12:20:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:07.253 12:20:56 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:07.253 12:20:56 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:32:07.253 12:20:56 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:07.253 12:20:56 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:32:07.253 12:20:56 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:07.253 12:20:56 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:07.253 rmmod nvme_tcp 00:32:07.253 rmmod nvme_fabrics 00:32:07.253 rmmod nvme_keyring 00:32:07.253 12:20:56 nvmf_abort_qd_sizes -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:32:07.253 12:20:56 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:32:07.253 12:20:56 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:32:07.253 12:20:56 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2417658 ']' 00:32:07.253 12:20:56 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2417658 00:32:07.253 12:20:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@949 -- # '[' -z 2417658 ']' 00:32:07.253 12:20:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@953 -- # kill -0 2417658 00:32:07.253 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (2417658) - No such process 00:32:07.253 12:20:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@976 -- # echo 'Process with pid 2417658 is not found' 00:32:07.253 Process with pid 2417658 is not found 00:32:07.253 12:20:56 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:07.254 12:20:56 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:10.533 Waiting for block devices as requested 00:32:10.533 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:10.533 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:10.533 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:10.533 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:10.533 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:10.533 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:10.533 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:10.533 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:10.533 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:10.792 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:10.792 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:10.792 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:11.051 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:11.051 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:11.051 
0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:11.310 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:11.310 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:32:11.572 12:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:11.572 12:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:11.572 12:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:11.572 12:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:11.572 12:21:00 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:11.572 12:21:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:11.572 12:21:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:13.476 12:21:02 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:13.476 00:32:13.476 real 0m51.812s 00:32:13.476 user 1m10.330s 00:32:13.476 sys 0m18.074s 00:32:13.476 12:21:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:13.476 12:21:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:13.476 ************************************ 00:32:13.476 END TEST nvmf_abort_qd_sizes 00:32:13.476 ************************************ 00:32:13.476 12:21:02 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:13.476 12:21:02 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:32:13.476 12:21:02 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:13.476 12:21:02 -- common/autotest_common.sh@10 -- # set +x 00:32:13.735 ************************************ 00:32:13.735 START TEST keyring_file 00:32:13.735 ************************************ 00:32:13.735 12:21:02 keyring_file -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:13.735 * Looking for test storage... 00:32:13.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:13.735 12:21:03 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:13.735 12:21:03 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:13.735 12:21:03 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:13.735 12:21:03 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:13.735 12:21:03 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:13.735 12:21:03 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:13.735 12:21:03 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:13.735 12:21:03 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:13.735 12:21:03 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:13.735 12:21:03 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:13.735 12:21:03 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:13.735 12:21:03 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:13.735 12:21:03 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:13.735 12:21:03 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:32:13.735 12:21:03 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:32:13.735 12:21:03 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:13.735 12:21:03 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:13.735 12:21:03 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:13.735 12:21:03 keyring_file -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:13.735 12:21:03 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:13.735 12:21:03 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:13.735 12:21:03 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:13.735 12:21:03 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:13.735 12:21:03 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.735 12:21:03 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.735 12:21:03 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.735 12:21:03 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:13.735 12:21:03 keyring_file -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.735 12:21:03 keyring_file -- nvmf/common.sh@47 -- # : 0 00:32:13.735 12:21:03 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:13.735 12:21:03 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:13.735 12:21:03 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:13.735 12:21:03 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:13.735 12:21:03 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:13.735 12:21:03 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:13.735 12:21:03 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:13.735 12:21:03 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:13.735 12:21:03 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:13.735 12:21:03 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:13.735 12:21:03 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:13.735 12:21:03 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:13.735 12:21:03 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:13.735 12:21:03 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:13.735 12:21:03 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:13.735 12:21:03 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:13.735 12:21:03 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:13.735 12:21:03 
keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:13.735 12:21:03 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:13.735 12:21:03 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:13.735 12:21:03 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.5qDCgIeX8K 00:32:13.735 12:21:03 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:13.735 12:21:03 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:13.735 12:21:03 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:13.736 12:21:03 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:13.736 12:21:03 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:13.736 12:21:03 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:13.736 12:21:03 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:13.736 12:21:03 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5qDCgIeX8K 00:32:13.736 12:21:03 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.5qDCgIeX8K 00:32:13.736 12:21:03 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.5qDCgIeX8K 00:32:13.736 12:21:03 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:13.736 12:21:03 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:13.736 12:21:03 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:13.736 12:21:03 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:13.736 12:21:03 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:13.736 12:21:03 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:13.736 12:21:03 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.5pO9qxrEKG 00:32:13.736 12:21:03 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:13.736 12:21:03 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:13.736 12:21:03 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:13.736 12:21:03 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:13.736 12:21:03 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:13.736 12:21:03 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:13.736 12:21:03 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:13.736 12:21:03 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5pO9qxrEKG 00:32:13.736 12:21:03 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.5pO9qxrEKG 00:32:13.736 12:21:03 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.5pO9qxrEKG 00:32:13.736 12:21:03 keyring_file -- keyring/file.sh@30 -- # tgtpid=2426885 00:32:13.736 12:21:03 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2426885 00:32:13.736 12:21:03 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:13.736 12:21:03 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 2426885 ']' 00:32:13.736 12:21:03 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:13.736 12:21:03 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:13.736 12:21:03 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:13.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:13.736 12:21:03 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:13.736 12:21:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:13.995 [2024-06-10 12:21:03.303235] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:32:13.995 [2024-06-10 12:21:03.303292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2426885 ] 00:32:13.995 EAL: No free 2048 kB hugepages reported on node 1 00:32:13.995 [2024-06-10 12:21:03.372678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.995 [2024-06-10 12:21:03.447134] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:32:14.931 12:21:04 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:14.931 12:21:04 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:32:14.931 12:21:04 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:14.931 12:21:04 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:14.931 12:21:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:14.931 [2024-06-10 12:21:04.105669] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:14.931 null0 00:32:14.931 [2024-06-10 12:21:04.137710] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:14.931 [2024-06-10 12:21:04.138074] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:14.931 [2024-06-10 12:21:04.145736] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:14.931 12:21:04 keyring_file -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:14.931 12:21:04 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:14.931 12:21:04 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:32:14.931 12:21:04 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 
nqn.2016-06.io.spdk:cnode0 00:32:14.931 12:21:04 keyring_file -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:32:14.931 12:21:04 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:14.931 12:21:04 keyring_file -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:32:14.931 12:21:04 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:14.932 12:21:04 keyring_file -- common/autotest_common.sh@652 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:14.932 12:21:04 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:14.932 12:21:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:14.932 [2024-06-10 12:21:04.153751] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:14.932 request: 00:32:14.932 { 00:32:14.932 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:14.932 "secure_channel": false, 00:32:14.932 "listen_address": { 00:32:14.932 "trtype": "tcp", 00:32:14.932 "traddr": "127.0.0.1", 00:32:14.932 "trsvcid": "4420" 00:32:14.932 }, 00:32:14.932 "method": "nvmf_subsystem_add_listener", 00:32:14.932 "req_id": 1 00:32:14.932 } 00:32:14.932 Got JSON-RPC error response 00:32:14.932 response: 00:32:14.932 { 00:32:14.932 "code": -32602, 00:32:14.932 "message": "Invalid parameters" 00:32:14.932 } 00:32:14.932 12:21:04 keyring_file -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:32:14.932 12:21:04 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:32:14.932 12:21:04 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:14.932 12:21:04 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:14.932 12:21:04 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:14.932 12:21:04 keyring_file -- keyring/file.sh@46 -- # bperfpid=2427138 00:32:14.932 12:21:04 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2427138 /var/tmp/bperf.sock 
00:32:14.932 12:21:04 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 2427138 ']' 00:32:14.932 12:21:04 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:14.932 12:21:04 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:14.932 12:21:04 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:14.932 12:21:04 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:14.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:14.932 12:21:04 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:14.932 12:21:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:14.932 [2024-06-10 12:21:04.191388] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:32:14.932 [2024-06-10 12:21:04.191433] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2427138 ] 00:32:14.932 EAL: No free 2048 kB hugepages reported on node 1 00:32:14.932 [2024-06-10 12:21:04.261356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:14.932 [2024-06-10 12:21:04.335307] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:32:15.499 12:21:04 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:15.499 12:21:04 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:32:15.499 12:21:04 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5qDCgIeX8K 00:32:15.499 12:21:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5qDCgIeX8K 00:32:15.758 12:21:05 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.5pO9qxrEKG 00:32:15.758 12:21:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.5pO9qxrEKG 00:32:16.016 12:21:05 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:32:16.016 12:21:05 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:32:16.016 12:21:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:16.016 12:21:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:16.016 12:21:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:16.016 12:21:05 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.5qDCgIeX8K == \/\t\m\p\/\t\m\p\.\5\q\D\C\g\I\e\X\8\K ]] 00:32:16.016 
12:21:05 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:32:16.016 12:21:05 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:16.016 12:21:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:16.016 12:21:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:16.016 12:21:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:16.275 12:21:05 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.5pO9qxrEKG == \/\t\m\p\/\t\m\p\.\5\p\O\9\q\x\r\E\K\G ]] 00:32:16.275 12:21:05 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:32:16.275 12:21:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:16.275 12:21:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:16.275 12:21:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:16.275 12:21:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:16.275 12:21:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:16.534 12:21:05 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:32:16.534 12:21:05 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:32:16.534 12:21:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:16.534 12:21:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:16.534 12:21:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:16.534 12:21:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:16.534 12:21:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:16.793 12:21:06 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:16.793 12:21:06 keyring_file -- 
keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:16.793 12:21:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:16.793 [2024-06-10 12:21:06.213241] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:16.793 nvme0n1 00:32:16.793 12:21:06 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:32:16.793 12:21:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:16.793 12:21:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:16.793 12:21:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:16.793 12:21:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:16.793 12:21:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:17.052 12:21:06 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:32:17.052 12:21:06 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:32:17.052 12:21:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:17.052 12:21:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:17.052 12:21:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:17.052 12:21:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:17.052 12:21:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:17.311 12:21:06 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:32:17.311 12:21:06 
keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:17.311 Running I/O for 1 seconds... 00:32:18.696 00:32:18.696 Latency(us) 00:32:18.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:18.696 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:18.696 nvme0n1 : 1.00 17144.05 66.97 0.00 0.00 7449.26 3853.52 17511.22 00:32:18.696 =================================================================================================================== 00:32:18.696 Total : 17144.05 66.97 0.00 0.00 7449.26 3853.52 17511.22 00:32:18.696 0 00:32:18.696 12:21:07 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:18.696 12:21:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:18.696 12:21:07 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:32:18.696 12:21:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:18.696 12:21:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:18.696 12:21:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:18.696 12:21:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:18.696 12:21:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:18.696 12:21:08 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:32:18.696 12:21:08 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:32:18.696 12:21:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:18.696 12:21:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:18.696 12:21:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:32:18.696 12:21:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:18.696 12:21:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:18.998 12:21:08 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:18.998 12:21:08 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:18.998 12:21:08 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:32:18.998 12:21:08 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:18.998 12:21:08 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:32:18.998 12:21:08 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:18.998 12:21:08 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:32:18.998 12:21:08 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:18.998 12:21:08 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:18.998 12:21:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:18.998 [2024-06-10 12:21:08.480187] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 
107: Transport endpoint is not connected 00:32:18.998 [2024-06-10 12:21:08.480696] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe3200 (107): Transport endpoint is not connected 00:32:18.998 [2024-06-10 12:21:08.481691] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe3200 (9): Bad file descriptor 00:32:18.998 [2024-06-10 12:21:08.482692] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:18.998 [2024-06-10 12:21:08.482702] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:18.998 [2024-06-10 12:21:08.482711] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:18.998 request: 00:32:18.998 { 00:32:18.998 "name": "nvme0", 00:32:18.998 "trtype": "tcp", 00:32:18.998 "traddr": "127.0.0.1", 00:32:18.998 "adrfam": "ipv4", 00:32:18.998 "trsvcid": "4420", 00:32:18.998 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:18.998 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:18.998 "prchk_reftag": false, 00:32:18.998 "prchk_guard": false, 00:32:18.998 "hdgst": false, 00:32:18.998 "ddgst": false, 00:32:18.998 "psk": "key1", 00:32:18.998 "method": "bdev_nvme_attach_controller", 00:32:18.998 "req_id": 1 00:32:18.998 } 00:32:18.998 Got JSON-RPC error response 00:32:18.998 response: 00:32:18.998 { 00:32:18.998 "code": -5, 00:32:18.998 "message": "Input/output error" 00:32:18.998 } 00:32:18.998 12:21:08 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:32:18.998 12:21:08 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:18.998 12:21:08 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:18.998 12:21:08 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:18.998 12:21:08 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:32:18.998 12:21:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 
00:32:18.998 12:21:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:19.295 12:21:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:19.295 12:21:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:19.295 12:21:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:19.295 12:21:08 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:32:19.295 12:21:08 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:32:19.295 12:21:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:19.295 12:21:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:19.295 12:21:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:19.295 12:21:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:19.295 12:21:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:19.553 12:21:08 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:19.553 12:21:08 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:32:19.553 12:21:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:19.553 12:21:09 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:32:19.553 12:21:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:19.811 12:21:09 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:32:19.811 12:21:09 keyring_file -- keyring/file.sh@77 -- # jq length 00:32:19.811 12:21:09 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:20.072 12:21:09 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:32:20.072 12:21:09 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.5qDCgIeX8K 00:32:20.072 12:21:09 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.5qDCgIeX8K 00:32:20.072 12:21:09 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:32:20.072 12:21:09 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.5qDCgIeX8K 00:32:20.072 12:21:09 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:32:20.072 12:21:09 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:20.072 12:21:09 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:32:20.072 12:21:09 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:20.072 12:21:09 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5qDCgIeX8K 00:32:20.072 12:21:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5qDCgIeX8K 00:32:20.072 [2024-06-10 12:21:09.506661] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.5qDCgIeX8K': 0100660 00:32:20.072 [2024-06-10 12:21:09.506690] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:20.072 request: 00:32:20.072 { 00:32:20.072 "name": "key0", 00:32:20.072 "path": "/tmp/tmp.5qDCgIeX8K", 00:32:20.072 "method": "keyring_file_add_key", 00:32:20.072 "req_id": 1 00:32:20.072 } 00:32:20.072 Got JSON-RPC error response 00:32:20.072 response: 00:32:20.072 { 00:32:20.072 "code": -1, 00:32:20.072 "message": "Operation not permitted" 00:32:20.072 } 00:32:20.072 
12:21:09 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:32:20.072 12:21:09 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:20.072 12:21:09 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:20.073 12:21:09 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:20.073 12:21:09 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.5qDCgIeX8K 00:32:20.073 12:21:09 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5qDCgIeX8K 00:32:20.073 12:21:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5qDCgIeX8K 00:32:20.331 12:21:09 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.5qDCgIeX8K 00:32:20.331 12:21:09 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:32:20.331 12:21:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:20.331 12:21:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:20.331 12:21:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:20.331 12:21:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:20.331 12:21:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:20.589 12:21:09 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:32:20.589 12:21:09 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:20.589 12:21:09 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:32:20.589 12:21:09 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk key0 00:32:20.589 12:21:09 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:32:20.589 12:21:09 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:20.589 12:21:09 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:32:20.589 12:21:09 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:20.589 12:21:09 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:20.589 12:21:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:20.589 [2024-06-10 12:21:10.048093] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.5qDCgIeX8K': No such file or directory 00:32:20.589 [2024-06-10 12:21:10.048125] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:20.589 [2024-06-10 12:21:10.048147] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:20.589 [2024-06-10 12:21:10.048155] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:20.589 [2024-06-10 12:21:10.048163] bdev_nvme.c:6263:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:20.589 request: 00:32:20.589 { 00:32:20.589 "name": "nvme0", 00:32:20.589 "trtype": "tcp", 00:32:20.589 "traddr": "127.0.0.1", 00:32:20.589 "adrfam": "ipv4", 00:32:20.589 "trsvcid": "4420", 00:32:20.589 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:20.589 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:20.589 "prchk_reftag": false, 
00:32:20.589 "prchk_guard": false, 00:32:20.589 "hdgst": false, 00:32:20.589 "ddgst": false, 00:32:20.589 "psk": "key0", 00:32:20.589 "method": "bdev_nvme_attach_controller", 00:32:20.589 "req_id": 1 00:32:20.589 } 00:32:20.589 Got JSON-RPC error response 00:32:20.589 response: 00:32:20.589 { 00:32:20.589 "code": -19, 00:32:20.589 "message": "No such device" 00:32:20.589 } 00:32:20.589 12:21:10 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:32:20.589 12:21:10 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:20.589 12:21:10 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:20.589 12:21:10 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:20.589 12:21:10 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:32:20.589 12:21:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:20.846 12:21:10 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:20.846 12:21:10 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:20.846 12:21:10 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:20.846 12:21:10 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:20.846 12:21:10 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:20.846 12:21:10 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:20.847 12:21:10 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.IcaM64tOYo 00:32:20.847 12:21:10 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:20.847 12:21:10 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:20.847 12:21:10 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:20.847 12:21:10 keyring_file -- nvmf/common.sh@704 -- # 
prefix=NVMeTLSkey-1 00:32:20.847 12:21:10 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:20.847 12:21:10 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:20.847 12:21:10 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:20.847 12:21:10 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.IcaM64tOYo 00:32:20.847 12:21:10 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.IcaM64tOYo 00:32:20.847 12:21:10 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.IcaM64tOYo 00:32:20.847 12:21:10 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IcaM64tOYo 00:32:20.847 12:21:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IcaM64tOYo 00:32:21.104 12:21:10 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:21.104 12:21:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:21.361 nvme0n1 00:32:21.361 12:21:10 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:32:21.361 12:21:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:21.361 12:21:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:21.361 12:21:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:21.361 12:21:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:21.361 12:21:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:21.361 12:21:10 
keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:32:21.361 12:21:10 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:32:21.361 12:21:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:21.627 12:21:11 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:32:21.627 12:21:11 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:32:21.627 12:21:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:21.627 12:21:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:21.627 12:21:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:21.884 12:21:11 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:32:21.884 12:21:11 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:32:21.884 12:21:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:21.884 12:21:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:21.884 12:21:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:21.884 12:21:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:21.884 12:21:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:21.884 12:21:11 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:32:21.884 12:21:11 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:21.884 12:21:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:22.140 12:21:11 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:32:22.140 
12:21:11 keyring_file -- keyring/file.sh@104 -- # jq length 00:32:22.140 12:21:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:22.397 12:21:11 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:32:22.397 12:21:11 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IcaM64tOYo 00:32:22.397 12:21:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IcaM64tOYo 00:32:22.397 12:21:11 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.5pO9qxrEKG 00:32:22.397 12:21:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.5pO9qxrEKG 00:32:22.654 12:21:12 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:22.654 12:21:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:22.911 nvme0n1 00:32:22.911 12:21:12 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:32:22.912 12:21:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:23.169 12:21:12 keyring_file -- keyring/file.sh@112 -- # config='{ 00:32:23.169 "subsystems": [ 00:32:23.169 { 00:32:23.169 "subsystem": "keyring", 00:32:23.169 "config": [ 00:32:23.169 { 00:32:23.169 "method": "keyring_file_add_key", 00:32:23.169 "params": { 00:32:23.169 
"name": "key0", 00:32:23.169 "path": "/tmp/tmp.IcaM64tOYo" 00:32:23.169 } 00:32:23.169 }, 00:32:23.169 { 00:32:23.169 "method": "keyring_file_add_key", 00:32:23.169 "params": { 00:32:23.169 "name": "key1", 00:32:23.169 "path": "/tmp/tmp.5pO9qxrEKG" 00:32:23.169 } 00:32:23.169 } 00:32:23.169 ] 00:32:23.169 }, 00:32:23.169 { 00:32:23.169 "subsystem": "iobuf", 00:32:23.169 "config": [ 00:32:23.169 { 00:32:23.169 "method": "iobuf_set_options", 00:32:23.169 "params": { 00:32:23.169 "small_pool_count": 8192, 00:32:23.169 "large_pool_count": 1024, 00:32:23.169 "small_bufsize": 8192, 00:32:23.169 "large_bufsize": 135168 00:32:23.169 } 00:32:23.169 } 00:32:23.169 ] 00:32:23.169 }, 00:32:23.169 { 00:32:23.169 "subsystem": "sock", 00:32:23.169 "config": [ 00:32:23.169 { 00:32:23.169 "method": "sock_set_default_impl", 00:32:23.169 "params": { 00:32:23.169 "impl_name": "posix" 00:32:23.169 } 00:32:23.169 }, 00:32:23.169 { 00:32:23.169 "method": "sock_impl_set_options", 00:32:23.169 "params": { 00:32:23.169 "impl_name": "ssl", 00:32:23.169 "recv_buf_size": 4096, 00:32:23.169 "send_buf_size": 4096, 00:32:23.169 "enable_recv_pipe": true, 00:32:23.169 "enable_quickack": false, 00:32:23.169 "enable_placement_id": 0, 00:32:23.169 "enable_zerocopy_send_server": true, 00:32:23.169 "enable_zerocopy_send_client": false, 00:32:23.169 "zerocopy_threshold": 0, 00:32:23.169 "tls_version": 0, 00:32:23.169 "enable_ktls": false 00:32:23.169 } 00:32:23.169 }, 00:32:23.169 { 00:32:23.169 "method": "sock_impl_set_options", 00:32:23.169 "params": { 00:32:23.169 "impl_name": "posix", 00:32:23.169 "recv_buf_size": 2097152, 00:32:23.169 "send_buf_size": 2097152, 00:32:23.169 "enable_recv_pipe": true, 00:32:23.169 "enable_quickack": false, 00:32:23.169 "enable_placement_id": 0, 00:32:23.169 "enable_zerocopy_send_server": true, 00:32:23.169 "enable_zerocopy_send_client": false, 00:32:23.169 "zerocopy_threshold": 0, 00:32:23.169 "tls_version": 0, 00:32:23.169 "enable_ktls": false 00:32:23.169 } 
00:32:23.169 } 00:32:23.169 ] 00:32:23.169 }, 00:32:23.169 { 00:32:23.169 "subsystem": "vmd", 00:32:23.169 "config": [] 00:32:23.169 }, 00:32:23.169 { 00:32:23.169 "subsystem": "accel", 00:32:23.169 "config": [ 00:32:23.169 { 00:32:23.169 "method": "accel_set_options", 00:32:23.169 "params": { 00:32:23.169 "small_cache_size": 128, 00:32:23.169 "large_cache_size": 16, 00:32:23.169 "task_count": 2048, 00:32:23.169 "sequence_count": 2048, 00:32:23.169 "buf_count": 2048 00:32:23.169 } 00:32:23.169 } 00:32:23.169 ] 00:32:23.169 }, 00:32:23.169 { 00:32:23.169 "subsystem": "bdev", 00:32:23.169 "config": [ 00:32:23.169 { 00:32:23.169 "method": "bdev_set_options", 00:32:23.169 "params": { 00:32:23.169 "bdev_io_pool_size": 65535, 00:32:23.169 "bdev_io_cache_size": 256, 00:32:23.169 "bdev_auto_examine": true, 00:32:23.169 "iobuf_small_cache_size": 128, 00:32:23.169 "iobuf_large_cache_size": 16 00:32:23.169 } 00:32:23.169 }, 00:32:23.169 { 00:32:23.169 "method": "bdev_raid_set_options", 00:32:23.169 "params": { 00:32:23.169 "process_window_size_kb": 1024 00:32:23.169 } 00:32:23.169 }, 00:32:23.169 { 00:32:23.169 "method": "bdev_iscsi_set_options", 00:32:23.169 "params": { 00:32:23.169 "timeout_sec": 30 00:32:23.169 } 00:32:23.169 }, 00:32:23.169 { 00:32:23.169 "method": "bdev_nvme_set_options", 00:32:23.169 "params": { 00:32:23.169 "action_on_timeout": "none", 00:32:23.169 "timeout_us": 0, 00:32:23.169 "timeout_admin_us": 0, 00:32:23.169 "keep_alive_timeout_ms": 10000, 00:32:23.169 "arbitration_burst": 0, 00:32:23.169 "low_priority_weight": 0, 00:32:23.169 "medium_priority_weight": 0, 00:32:23.169 "high_priority_weight": 0, 00:32:23.169 "nvme_adminq_poll_period_us": 10000, 00:32:23.169 "nvme_ioq_poll_period_us": 0, 00:32:23.169 "io_queue_requests": 512, 00:32:23.169 "delay_cmd_submit": true, 00:32:23.169 "transport_retry_count": 4, 00:32:23.169 "bdev_retry_count": 3, 00:32:23.169 "transport_ack_timeout": 0, 00:32:23.169 "ctrlr_loss_timeout_sec": 0, 00:32:23.169 
"reconnect_delay_sec": 0, 00:32:23.169 "fast_io_fail_timeout_sec": 0, 00:32:23.169 "disable_auto_failback": false, 00:32:23.169 "generate_uuids": false, 00:32:23.169 "transport_tos": 0, 00:32:23.169 "nvme_error_stat": false, 00:32:23.169 "rdma_srq_size": 0, 00:32:23.169 "io_path_stat": false, 00:32:23.170 "allow_accel_sequence": false, 00:32:23.170 "rdma_max_cq_size": 0, 00:32:23.170 "rdma_cm_event_timeout_ms": 0, 00:32:23.170 "dhchap_digests": [ 00:32:23.170 "sha256", 00:32:23.170 "sha384", 00:32:23.170 "sha512" 00:32:23.170 ], 00:32:23.170 "dhchap_dhgroups": [ 00:32:23.170 "null", 00:32:23.170 "ffdhe2048", 00:32:23.170 "ffdhe3072", 00:32:23.170 "ffdhe4096", 00:32:23.170 "ffdhe6144", 00:32:23.170 "ffdhe8192" 00:32:23.170 ] 00:32:23.170 } 00:32:23.170 }, 00:32:23.170 { 00:32:23.170 "method": "bdev_nvme_attach_controller", 00:32:23.170 "params": { 00:32:23.170 "name": "nvme0", 00:32:23.170 "trtype": "TCP", 00:32:23.170 "adrfam": "IPv4", 00:32:23.170 "traddr": "127.0.0.1", 00:32:23.170 "trsvcid": "4420", 00:32:23.170 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:23.170 "prchk_reftag": false, 00:32:23.170 "prchk_guard": false, 00:32:23.170 "ctrlr_loss_timeout_sec": 0, 00:32:23.170 "reconnect_delay_sec": 0, 00:32:23.170 "fast_io_fail_timeout_sec": 0, 00:32:23.170 "psk": "key0", 00:32:23.170 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:23.170 "hdgst": false, 00:32:23.170 "ddgst": false 00:32:23.170 } 00:32:23.170 }, 00:32:23.170 { 00:32:23.170 "method": "bdev_nvme_set_hotplug", 00:32:23.170 "params": { 00:32:23.170 "period_us": 100000, 00:32:23.170 "enable": false 00:32:23.170 } 00:32:23.170 }, 00:32:23.170 { 00:32:23.170 "method": "bdev_wait_for_examine" 00:32:23.170 } 00:32:23.170 ] 00:32:23.170 }, 00:32:23.170 { 00:32:23.170 "subsystem": "nbd", 00:32:23.170 "config": [] 00:32:23.170 } 00:32:23.170 ] 00:32:23.170 }' 00:32:23.170 12:21:12 keyring_file -- keyring/file.sh@114 -- # killprocess 2427138 00:32:23.170 12:21:12 keyring_file -- common/autotest_common.sh@949 -- 
# '[' -z 2427138 ']' 00:32:23.170 12:21:12 keyring_file -- common/autotest_common.sh@953 -- # kill -0 2427138 00:32:23.170 12:21:12 keyring_file -- common/autotest_common.sh@954 -- # uname 00:32:23.170 12:21:12 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:23.170 12:21:12 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2427138 00:32:23.170 12:21:12 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:32:23.170 12:21:12 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:32:23.170 12:21:12 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2427138' 00:32:23.170 killing process with pid 2427138 00:32:23.170 12:21:12 keyring_file -- common/autotest_common.sh@968 -- # kill 2427138 00:32:23.170 Received shutdown signal, test time was about 1.000000 seconds 00:32:23.170 00:32:23.170 Latency(us) 00:32:23.170 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:23.170 =================================================================================================================== 00:32:23.170 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:23.170 12:21:12 keyring_file -- common/autotest_common.sh@973 -- # wait 2427138 00:32:23.427 12:21:12 keyring_file -- keyring/file.sh@117 -- # bperfpid=2428938 00:32:23.427 12:21:12 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2428938 /var/tmp/bperf.sock 00:32:23.427 12:21:12 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 2428938 ']' 00:32:23.427 12:21:12 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:23.427 12:21:12 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:23.427 12:21:12 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 
00:32:23.427 12:21:12 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:23.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:23.427 12:21:12 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:23.427 12:21:12 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:32:23.427 "subsystems": [ 00:32:23.427 { 00:32:23.427 "subsystem": "keyring", 00:32:23.427 "config": [ 00:32:23.427 { 00:32:23.427 "method": "keyring_file_add_key", 00:32:23.427 "params": { 00:32:23.427 "name": "key0", 00:32:23.427 "path": "/tmp/tmp.IcaM64tOYo" 00:32:23.427 } 00:32:23.427 }, 00:32:23.427 { 00:32:23.427 "method": "keyring_file_add_key", 00:32:23.427 "params": { 00:32:23.427 "name": "key1", 00:32:23.427 "path": "/tmp/tmp.5pO9qxrEKG" 00:32:23.427 } 00:32:23.427 } 00:32:23.427 ] 00:32:23.427 }, 00:32:23.427 { 00:32:23.427 "subsystem": "iobuf", 00:32:23.427 "config": [ 00:32:23.427 { 00:32:23.427 "method": "iobuf_set_options", 00:32:23.427 "params": { 00:32:23.427 "small_pool_count": 8192, 00:32:23.427 "large_pool_count": 1024, 00:32:23.427 "small_bufsize": 8192, 00:32:23.427 "large_bufsize": 135168 00:32:23.427 } 00:32:23.427 } 00:32:23.427 ] 00:32:23.427 }, 00:32:23.427 { 00:32:23.427 "subsystem": "sock", 00:32:23.427 "config": [ 00:32:23.427 { 00:32:23.427 "method": "sock_set_default_impl", 00:32:23.427 "params": { 00:32:23.427 "impl_name": "posix" 00:32:23.427 } 00:32:23.427 }, 00:32:23.427 { 00:32:23.427 "method": "sock_impl_set_options", 00:32:23.427 "params": { 00:32:23.427 "impl_name": "ssl", 00:32:23.427 "recv_buf_size": 4096, 00:32:23.427 "send_buf_size": 4096, 00:32:23.427 "enable_recv_pipe": true, 00:32:23.427 "enable_quickack": false, 00:32:23.427 "enable_placement_id": 0, 00:32:23.427 "enable_zerocopy_send_server": true, 00:32:23.427 "enable_zerocopy_send_client": false, 00:32:23.427 "zerocopy_threshold": 0, 
00:32:23.427 "tls_version": 0, 00:32:23.427 "enable_ktls": false 00:32:23.427 } 00:32:23.427 }, 00:32:23.427 { 00:32:23.427 "method": "sock_impl_set_options", 00:32:23.427 "params": { 00:32:23.427 "impl_name": "posix", 00:32:23.427 "recv_buf_size": 2097152, 00:32:23.427 "send_buf_size": 2097152, 00:32:23.427 "enable_recv_pipe": true, 00:32:23.427 "enable_quickack": false, 00:32:23.427 "enable_placement_id": 0, 00:32:23.427 "enable_zerocopy_send_server": true, 00:32:23.427 "enable_zerocopy_send_client": false, 00:32:23.427 "zerocopy_threshold": 0, 00:32:23.427 "tls_version": 0, 00:32:23.427 "enable_ktls": false 00:32:23.427 } 00:32:23.427 } 00:32:23.427 ] 00:32:23.427 }, 00:32:23.427 { 00:32:23.427 "subsystem": "vmd", 00:32:23.427 "config": [] 00:32:23.427 }, 00:32:23.427 { 00:32:23.427 "subsystem": "accel", 00:32:23.427 "config": [ 00:32:23.427 { 00:32:23.427 "method": "accel_set_options", 00:32:23.427 "params": { 00:32:23.427 "small_cache_size": 128, 00:32:23.427 "large_cache_size": 16, 00:32:23.427 "task_count": 2048, 00:32:23.427 "sequence_count": 2048, 00:32:23.427 "buf_count": 2048 00:32:23.427 } 00:32:23.427 } 00:32:23.427 ] 00:32:23.427 }, 00:32:23.427 { 00:32:23.427 "subsystem": "bdev", 00:32:23.427 "config": [ 00:32:23.427 { 00:32:23.428 "method": "bdev_set_options", 00:32:23.428 "params": { 00:32:23.428 "bdev_io_pool_size": 65535, 00:32:23.428 "bdev_io_cache_size": 256, 00:32:23.428 "bdev_auto_examine": true, 00:32:23.428 "iobuf_small_cache_size": 128, 00:32:23.428 "iobuf_large_cache_size": 16 00:32:23.428 } 00:32:23.428 }, 00:32:23.428 { 00:32:23.428 "method": "bdev_raid_set_options", 00:32:23.428 "params": { 00:32:23.428 "process_window_size_kb": 1024 00:32:23.428 } 00:32:23.428 }, 00:32:23.428 { 00:32:23.428 "method": "bdev_iscsi_set_options", 00:32:23.428 "params": { 00:32:23.428 "timeout_sec": 30 00:32:23.428 } 00:32:23.428 }, 00:32:23.428 { 00:32:23.428 "method": "bdev_nvme_set_options", 00:32:23.428 "params": { 00:32:23.428 "action_on_timeout": 
"none", 00:32:23.428 "timeout_us": 0, 00:32:23.428 "timeout_admin_us": 0, 00:32:23.428 "keep_alive_timeout_ms": 10000, 00:32:23.428 "arbitration_burst": 0, 00:32:23.428 "low_priority_weight": 0, 00:32:23.428 "medium_priority_weight": 0, 00:32:23.428 "high_priority_weight": 0, 00:32:23.428 "nvme_adminq_poll_period_us": 10000, 00:32:23.428 "nvme_ioq_poll_period_us": 0, 00:32:23.428 "io_queue_requests": 512, 00:32:23.428 "delay_cmd_submit": true, 00:32:23.428 "transport_retry_count": 4, 00:32:23.428 "bdev_retry_count": 3, 00:32:23.428 "transport_ack_timeout": 0, 00:32:23.428 "ctrlr_loss_timeout_sec": 0, 00:32:23.428 "reconnect_delay_sec": 0, 00:32:23.428 "fast_io_fail_timeout_sec": 0, 00:32:23.428 "disable_auto_failback": false, 00:32:23.428 "generate_uuids": false, 00:32:23.428 "transport_tos": 0, 00:32:23.428 "nvme_error_stat": false, 00:32:23.428 "rdma_srq_size": 0, 00:32:23.428 "io_path_stat": false, 00:32:23.428 "allow_accel_sequence": false, 00:32:23.428 "rdma_max_cq_size": 0, 00:32:23.428 "rdma_cm_event_timeout_ms": 0, 00:32:23.428 "dhchap_digests": [ 00:32:23.428 "sha256", 00:32:23.428 "sha384", 00:32:23.428 "sha512" 00:32:23.428 ], 00:32:23.428 "dhchap_dhgroups": [ 00:32:23.428 "null", 00:32:23.428 "ffdhe2048", 00:32:23.428 "ffdhe3072", 00:32:23.428 "ffdhe4096", 00:32:23.428 "ffdhe6144", 00:32:23.428 "ffdhe8192" 00:32:23.428 ] 00:32:23.428 } 00:32:23.428 }, 00:32:23.428 { 00:32:23.428 "method": "bdev_nvme_attach_controller", 00:32:23.428 "params": { 00:32:23.428 "name": "nvme0", 00:32:23.428 "trtype": "TCP", 00:32:23.428 "adrfam": "IPv4", 00:32:23.428 "traddr": "127.0.0.1", 00:32:23.428 "trsvcid": "4420", 00:32:23.428 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:23.428 "prchk_reftag": false, 00:32:23.428 "prchk_guard": false, 00:32:23.428 "ctrlr_loss_timeout_sec": 0, 00:32:23.428 "reconnect_delay_sec": 0, 00:32:23.428 "fast_io_fail_timeout_sec": 0, 00:32:23.428 "psk": "key0", 00:32:23.428 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:23.428 "hdgst": false, 
00:32:23.428 "ddgst": false 00:32:23.428 } 00:32:23.428 }, 00:32:23.428 { 00:32:23.428 "method": "bdev_nvme_set_hotplug", 00:32:23.428 "params": { 00:32:23.428 "period_us": 100000, 00:32:23.428 "enable": false 00:32:23.428 } 00:32:23.428 }, 00:32:23.428 { 00:32:23.428 "method": "bdev_wait_for_examine" 00:32:23.428 } 00:32:23.428 ] 00:32:23.428 }, 00:32:23.428 { 00:32:23.428 "subsystem": "nbd", 00:32:23.428 "config": [] 00:32:23.428 } 00:32:23.428 ] 00:32:23.428 }' 00:32:23.428 12:21:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:23.428 [2024-06-10 12:21:12.825375] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 00:32:23.428 [2024-06-10 12:21:12.825429] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2428938 ] 00:32:23.428 EAL: No free 2048 kB hugepages reported on node 1 00:32:23.428 [2024-06-10 12:21:12.894579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.684 [2024-06-10 12:21:12.969391] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:32:23.684 [2024-06-10 12:21:13.127158] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:24.246 12:21:13 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:24.246 12:21:13 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:32:24.246 12:21:13 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:32:24.246 12:21:13 keyring_file -- keyring/file.sh@120 -- # jq length 00:32:24.246 12:21:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:24.501 12:21:13 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:32:24.501 12:21:13 keyring_file -- keyring/file.sh@121 -- # 
get_refcnt key0 00:32:24.501 12:21:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:24.501 12:21:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:24.501 12:21:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:24.501 12:21:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:24.501 12:21:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:24.502 12:21:13 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:24.502 12:21:13 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:32:24.502 12:21:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:24.502 12:21:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:24.502 12:21:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:24.502 12:21:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:24.502 12:21:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:24.758 12:21:14 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:32:24.758 12:21:14 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:32:24.758 12:21:14 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:32:24.758 12:21:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:25.015 12:21:14 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:32:25.015 12:21:14 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:25.015 12:21:14 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.IcaM64tOYo /tmp/tmp.5pO9qxrEKG 00:32:25.015 12:21:14 keyring_file -- keyring/file.sh@20 -- # killprocess 2428938 00:32:25.015 12:21:14 
keyring_file -- common/autotest_common.sh@949 -- # '[' -z 2428938 ']' 00:32:25.015 12:21:14 keyring_file -- common/autotest_common.sh@953 -- # kill -0 2428938 00:32:25.015 12:21:14 keyring_file -- common/autotest_common.sh@954 -- # uname 00:32:25.015 12:21:14 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:25.015 12:21:14 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2428938 00:32:25.015 12:21:14 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:32:25.015 12:21:14 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:32:25.015 12:21:14 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2428938' 00:32:25.015 killing process with pid 2428938 00:32:25.015 12:21:14 keyring_file -- common/autotest_common.sh@968 -- # kill 2428938 00:32:25.015 Received shutdown signal, test time was about 1.000000 seconds 00:32:25.015 00:32:25.015 Latency(us) 00:32:25.015 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:25.015 =================================================================================================================== 00:32:25.015 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:25.015 12:21:14 keyring_file -- common/autotest_common.sh@973 -- # wait 2428938 00:32:25.272 12:21:14 keyring_file -- keyring/file.sh@21 -- # killprocess 2426885 00:32:25.272 12:21:14 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 2426885 ']' 00:32:25.272 12:21:14 keyring_file -- common/autotest_common.sh@953 -- # kill -0 2426885 00:32:25.272 12:21:14 keyring_file -- common/autotest_common.sh@954 -- # uname 00:32:25.272 12:21:14 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:25.272 12:21:14 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2426885 00:32:25.272 12:21:14 keyring_file -- common/autotest_common.sh@955 -- # 
process_name=reactor_0 00:32:25.272 12:21:14 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:32:25.272 12:21:14 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2426885' 00:32:25.272 killing process with pid 2426885 00:32:25.272 12:21:14 keyring_file -- common/autotest_common.sh@968 -- # kill 2426885 00:32:25.272 [2024-06-10 12:21:14.613794] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:25.272 12:21:14 keyring_file -- common/autotest_common.sh@973 -- # wait 2426885 00:32:25.529 00:32:25.529 real 0m11.918s 00:32:25.529 user 0m27.760s 00:32:25.529 sys 0m3.394s 00:32:25.529 12:21:14 keyring_file -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:25.529 12:21:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:25.529 ************************************ 00:32:25.529 END TEST keyring_file 00:32:25.529 ************************************ 00:32:25.529 12:21:14 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:32:25.529 12:21:14 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:25.529 12:21:14 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:32:25.529 12:21:14 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:25.529 12:21:14 -- common/autotest_common.sh@10 -- # set +x 00:32:25.529 ************************************ 00:32:25.529 START TEST keyring_linux 00:32:25.529 ************************************ 00:32:25.529 12:21:14 keyring_linux -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:25.785 * Looking for test storage... 
00:32:25.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:25.785 12:21:15 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:25.785 12:21:15 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:25.785 12:21:15 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:25.785 12:21:15 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:25.786 12:21:15 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:25.786 12:21:15 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:25.786 12:21:15 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:25.786 12:21:15 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:25.786 12:21:15 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.786 12:21:15 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.786 12:21:15 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.786 12:21:15 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:25.786 12:21:15 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:25.786 12:21:15 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:25.786 12:21:15 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:25.786 12:21:15 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:25.786 12:21:15 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:25.786 12:21:15 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:25.786 12:21:15 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:25.786 12:21:15 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:25.786 12:21:15 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:25.786 12:21:15 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:32:25.786 12:21:15 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:25.786 12:21:15 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:25.786 12:21:15 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:25.786 12:21:15 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:25.786 12:21:15 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:25.786 12:21:15 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:25.786 /tmp/:spdk-test:key0 00:32:25.786 12:21:15 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:25.786 12:21:15 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:25.786 12:21:15 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:25.786 12:21:15 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:25.786 12:21:15 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:25.786 12:21:15 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:25.786 12:21:15 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:25.786 12:21:15 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:25.786 12:21:15 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:25.786 12:21:15 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:25.786 /tmp/:spdk-test:key1 00:32:25.786 12:21:15 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:25.786 12:21:15 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2429506 00:32:25.786 12:21:15 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2429506 00:32:25.786 12:21:15 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 2429506 ']' 00:32:25.786 12:21:15 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:25.786 12:21:15 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:25.786 12:21:15 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:25.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:25.786 12:21:15 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:25.786 12:21:15 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:25.786 [2024-06-10 12:21:15.252453] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:32:25.786 [2024-06-10 12:21:15.252519] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2429506 ] 00:32:25.786 EAL: No free 2048 kB hugepages reported on node 1 00:32:26.043 [2024-06-10 12:21:15.321578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.043 [2024-06-10 12:21:15.396026] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:32:26.605 12:21:16 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:26.605 12:21:16 keyring_linux -- common/autotest_common.sh@863 -- # return 0 00:32:26.605 12:21:16 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:32:26.605 12:21:16 keyring_linux -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:26.605 12:21:16 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:26.605 [2024-06-10 12:21:16.063919] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:26.605 null0 00:32:26.605 [2024-06-10 12:21:16.095980] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:26.605 [2024-06-10 12:21:16.096343] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:26.605 12:21:16 keyring_linux -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:26.605 12:21:16 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:32:26.605 127287075 00:32:26.605 12:21:16 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:32:26.605 337672104 00:32:26.605 12:21:16 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2429588 00:32:26.605 12:21:16 keyring_linux -- keyring/linux.sh@68 -- # 
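The `NVMeTLSkey-1:00:…:` strings loaded into the session keyring above are NVMe/TCP interchange-format PSKs: base64 of the configured key material followed by a 4-byte CRC32, with the `00` field selecting "no HMAC" (digest 0). A minimal sketch of what `format_interchange_psk` computes; the little-endian byte order of the appended CRC is an assumption here, not taken from the log:

```python
import base64
import struct
import zlib


def format_interchange_psk(configured_key: str, hmac_id: int = 0) -> str:
    """Wrap a configured PSK string in the NVMe/TCP interchange format:
    'NVMeTLSkey-1:<hmac>:<base64(key || CRC32(key))>:'.

    Sketch only: the CRC32 byte order (little-endian below) is an
    assumption, so the exact base64 tail may differ from a real run.
    """
    key = configured_key.encode("ascii")
    crc = struct.pack("<I", zlib.crc32(key))          # assumed byte order
    blob = base64.b64encode(key + crc).decode("ascii")
    return "NVMeTLSkey-1:%02d:%s:" % (hmac_id, blob)


psk = format_interchange_psk("00112233445566778899aabbccddeeff")
# For comparison, the run above wrapped this key as:
# NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
```

Decoding the payload from the log confirms the layout: 48 base64 characters decode to 36 bytes, the first 32 of which are the ASCII hex key, leaving 4 trailing CRC bytes.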
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:32:26.605 12:21:16 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2429588 /var/tmp/bperf.sock 00:32:26.862 12:21:16 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 2429588 ']' 00:32:26.862 12:21:16 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:26.862 12:21:16 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:26.862 12:21:16 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:26.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:26.862 12:21:16 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:26.862 12:21:16 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:26.862 [2024-06-10 12:21:16.169764] Starting SPDK v24.09-pre git sha1 0a5aebcde / DPDK 24.03.0 initialization... 
00:32:26.862 [2024-06-10 12:21:16.169812] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2429588 ] 00:32:26.862 EAL: No free 2048 kB hugepages reported on node 1 00:32:26.862 [2024-06-10 12:21:16.240170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.862 [2024-06-10 12:21:16.313829] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:32:27.792 12:21:16 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:27.792 12:21:16 keyring_linux -- common/autotest_common.sh@863 -- # return 0 00:32:27.792 12:21:16 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:32:27.792 12:21:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:32:27.792 12:21:17 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:32:27.792 12:21:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:28.049 12:21:17 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:28.049 12:21:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:28.049 [2024-06-10 12:21:17.493108] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:28.049 nvme0n1 00:32:28.306 
12:21:17 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:32:28.306 12:21:17 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:32:28.306 12:21:17 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:28.306 12:21:17 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:28.306 12:21:17 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:28.306 12:21:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:28.306 12:21:17 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:32:28.306 12:21:17 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:28.306 12:21:17 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:32:28.306 12:21:17 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:32:28.306 12:21:17 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:28.306 12:21:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:28.306 12:21:17 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:32:28.562 12:21:17 keyring_linux -- keyring/linux.sh@25 -- # sn=127287075 00:32:28.562 12:21:17 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:32:28.562 12:21:17 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:28.562 12:21:17 keyring_linux -- keyring/linux.sh@26 -- # [[ 127287075 == \1\2\7\2\8\7\0\7\5 ]] 00:32:28.562 12:21:17 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 127287075 00:32:28.562 12:21:17 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:32:28.562 12:21:17 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:28.562 Running I/O for 1 seconds... 00:32:29.932 00:32:29.932 Latency(us) 00:32:29.932 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:29.932 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:29.932 nvme0n1 : 1.01 18337.41 71.63 0.00 0.00 6951.57 5662.31 12006.20 00:32:29.932 =================================================================================================================== 00:32:29.932 Total : 18337.41 71.63 0.00 0.00 6951.57 5662.31 12006.20 00:32:29.932 0 00:32:29.932 12:21:19 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:29.932 12:21:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:29.932 12:21:19 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:32:29.932 12:21:19 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:32:29.932 12:21:19 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:29.932 12:21:19 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:29.932 12:21:19 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:29.932 12:21:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:29.932 12:21:19 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:32:29.932 12:21:19 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:29.932 12:21:19 keyring_linux -- keyring/linux.sh@23 -- # return 00:32:29.932 12:21:19 keyring_linux -- 
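The bdevperf summary above is internally consistent, which is a quick way to spot a bogus run: throughput must equal IOPS times the 4 KiB I/O size, and with a fixed queue depth Little's law pins the average latency to roughly QD / IOPS. A small check against the reported numbers:

```python
# Figures from the bdevperf summary line above
# (QD 128, 4 KiB randread, 1 core, ~1.01 s runtime).
qd, io_size = 128, 4096
iops, mibps, avg_us = 18337.41, 71.63, 6951.57

# Throughput: IOPS * 4 KiB, expressed in MiB/s, matches the table.
assert round(iops * io_size / 2**20, 2) == mibps

# Little's law: with 128 requests always in flight, average latency
# is about QD / IOPS. The ~0.4% gap from the reported value is the
# slightly-over-1-second runtime folded into the IOPS figure.
little_us = qd / iops * 1e6
assert abs(little_us - avg_us) / avg_us < 0.01
```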
keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:29.932 12:21:19 keyring_linux -- common/autotest_common.sh@649 -- # local es=0 00:32:29.932 12:21:19 keyring_linux -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:29.932 12:21:19 keyring_linux -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:32:29.932 12:21:19 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:29.932 12:21:19 keyring_linux -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:32:29.932 12:21:19 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:29.932 12:21:19 keyring_linux -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:29.932 12:21:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:30.189 [2024-06-10 12:21:19.576315] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:30.189 [2024-06-10 12:21:19.577062] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8bf190 (107): Transport endpoint is not connected 00:32:30.189 [2024-06-10 12:21:19.578056] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x8bf190 (9): Bad file descriptor 00:32:30.189 [2024-06-10 12:21:19.579057] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:30.189 [2024-06-10 12:21:19.579069] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:30.189 [2024-06-10 12:21:19.579078] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:30.189 request: 00:32:30.189 { 00:32:30.189 "name": "nvme0", 00:32:30.189 "trtype": "tcp", 00:32:30.189 "traddr": "127.0.0.1", 00:32:30.189 "adrfam": "ipv4", 00:32:30.189 "trsvcid": "4420", 00:32:30.189 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:30.189 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:30.189 "prchk_reftag": false, 00:32:30.189 "prchk_guard": false, 00:32:30.189 "hdgst": false, 00:32:30.189 "ddgst": false, 00:32:30.189 "psk": ":spdk-test:key1", 00:32:30.189 "method": "bdev_nvme_attach_controller", 00:32:30.189 "req_id": 1 00:32:30.189 } 00:32:30.189 Got JSON-RPC error response 00:32:30.189 response: 00:32:30.189 { 00:32:30.189 "code": -5, 00:32:30.189 "message": "Input/output error" 00:32:30.189 } 00:32:30.189 12:21:19 keyring_linux -- common/autotest_common.sh@652 -- # es=1 00:32:30.189 12:21:19 keyring_linux -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:30.189 12:21:19 keyring_linux -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:30.189 12:21:19 keyring_linux -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:30.189 12:21:19 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:32:30.189 12:21:19 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:30.189 12:21:19 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:32:30.189 12:21:19 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:32:30.189 12:21:19 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:32:30.189 12:21:19 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key0 00:32:30.189 12:21:19 keyring_linux -- keyring/linux.sh@33 -- # sn=127287075 00:32:30.189 12:21:19 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 127287075 00:32:30.189 1 links removed 00:32:30.189 12:21:19 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:30.189 12:21:19 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:32:30.189 12:21:19 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:32:30.189 12:21:19 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:32:30.189 12:21:19 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:32:30.189 12:21:19 keyring_linux -- keyring/linux.sh@33 -- # sn=337672104 00:32:30.189 12:21:19 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 337672104 00:32:30.189 1 links removed 00:32:30.189 12:21:19 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2429588 00:32:30.189 12:21:19 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 2429588 ']' 00:32:30.189 12:21:19 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 2429588 00:32:30.189 12:21:19 keyring_linux -- common/autotest_common.sh@954 -- # uname 00:32:30.189 12:21:19 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:30.189 12:21:19 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2429588 00:32:30.189 12:21:19 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:32:30.189 12:21:19 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:32:30.189 12:21:19 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2429588' 00:32:30.189 killing process with pid 2429588 00:32:30.189 12:21:19 keyring_linux -- common/autotest_common.sh@968 -- # kill 2429588 00:32:30.189 Received shutdown signal, test time was about 1.000000 seconds 00:32:30.189 00:32:30.189 Latency(us) 00:32:30.189 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:30.189 =================================================================================================================== 00:32:30.189 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:30.189 12:21:19 keyring_linux -- common/autotest_common.sh@973 -- # wait 2429588 00:32:30.447 12:21:19 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2429506 00:32:30.447 12:21:19 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 2429506 ']' 00:32:30.447 12:21:19 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 2429506 00:32:30.447 12:21:19 keyring_linux -- common/autotest_common.sh@954 -- # uname 00:32:30.447 12:21:19 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:30.447 12:21:19 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2429506 00:32:30.447 12:21:19 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:32:30.447 12:21:19 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:32:30.447 12:21:19 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2429506' 00:32:30.447 killing process with pid 2429506 00:32:30.447 12:21:19 keyring_linux -- common/autotest_common.sh@968 -- # kill 2429506 00:32:30.447 12:21:19 keyring_linux -- common/autotest_common.sh@973 -- # wait 2429506 00:32:30.705 00:32:30.705 real 0m5.201s 00:32:30.705 user 0m9.202s 00:32:30.705 sys 0m1.706s 00:32:30.705 12:21:20 keyring_linux -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:30.705 12:21:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:30.705 ************************************ 00:32:30.705 END TEST keyring_linux 00:32:30.705 ************************************ 00:32:30.961 12:21:20 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:32:30.961 12:21:20 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:32:30.961 12:21:20 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 
']' 00:32:30.961 12:21:20 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:32:30.961 12:21:20 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:32:30.961 12:21:20 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:32:30.961 12:21:20 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:32:30.961 12:21:20 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:32:30.961 12:21:20 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:32:30.961 12:21:20 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:32:30.961 12:21:20 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:32:30.961 12:21:20 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:32:30.961 12:21:20 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:32:30.961 12:21:20 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:32:30.961 12:21:20 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:32:30.961 12:21:20 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:32:30.961 12:21:20 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:32:30.961 12:21:20 -- common/autotest_common.sh@723 -- # xtrace_disable 00:32:30.961 12:21:20 -- common/autotest_common.sh@10 -- # set +x 00:32:30.961 12:21:20 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:32:30.961 12:21:20 -- common/autotest_common.sh@1391 -- # local autotest_es=0 00:32:30.961 12:21:20 -- common/autotest_common.sh@1392 -- # xtrace_disable 00:32:30.961 12:21:20 -- common/autotest_common.sh@10 -- # set +x 00:32:37.530 INFO: APP EXITING 00:32:37.530 INFO: killing all VMs 00:32:37.530 INFO: killing vhost app 00:32:37.530 INFO: EXIT DONE 00:32:40.063 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:32:40.063 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:32:40.063 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:32:40.063 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:32:40.063 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:32:40.063 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:32:40.063 0000:00:04.1 (8086 2021): Already 
using the ioatdma driver 00:32:40.063 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:32:40.063 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:32:40.063 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:32:40.063 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:32:40.063 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:32:40.063 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:32:40.063 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:32:40.322 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:32:40.322 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:32:40.322 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:32:43.602 Cleaning 00:32:43.602 Removing: /var/run/dpdk/spdk0/config 00:32:43.602 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:43.602 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:43.602 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:43.602 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:43.602 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:43.602 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:43.602 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:43.602 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:43.602 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:43.602 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:43.602 Removing: /var/run/dpdk/spdk1/config 00:32:43.602 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:43.602 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:43.602 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:43.602 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:43.602 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:43.602 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:43.602 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 
00:32:43.602 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:43.602 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:43.602 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:43.602 Removing: /var/run/dpdk/spdk1/mp_socket 00:32:43.602 Removing: /var/run/dpdk/spdk2/config 00:32:43.602 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:43.602 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:43.602 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:43.602 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:43.602 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:43.602 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:43.602 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:43.602 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:43.602 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:43.602 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:43.602 Removing: /var/run/dpdk/spdk3/config 00:32:43.602 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:43.602 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:43.602 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:43.602 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:43.602 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:43.602 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:43.602 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:43.602 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:43.602 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:43.602 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:43.602 Removing: /var/run/dpdk/spdk4/config 00:32:43.602 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:43.602 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:43.602 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:43.602 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:43.602 
Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:43.602 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:43.602 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:43.602 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:43.602 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:43.602 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:43.602 Removing: /dev/shm/bdev_svc_trace.1 00:32:43.602 Removing: /dev/shm/nvmf_trace.0 00:32:43.602 Removing: /dev/shm/spdk_tgt_trace.pid2024190 00:32:43.602 Removing: /var/run/dpdk/spdk0 00:32:43.602 Removing: /var/run/dpdk/spdk1 00:32:43.602 Removing: /var/run/dpdk/spdk2 00:32:43.602 Removing: /var/run/dpdk/spdk3 00:32:43.602 Removing: /var/run/dpdk/spdk4 00:32:43.602 Removing: /var/run/dpdk/spdk_pid2021583 00:32:43.602 Removing: /var/run/dpdk/spdk_pid2022841 00:32:43.602 Removing: /var/run/dpdk/spdk_pid2024190 00:32:43.602 Removing: /var/run/dpdk/spdk_pid2025308 00:32:43.602 Removing: /var/run/dpdk/spdk_pid2026257 00:32:43.602 Removing: /var/run/dpdk/spdk_pid2026452 00:32:43.602 Removing: /var/run/dpdk/spdk_pid2027544 00:32:43.602 Removing: /var/run/dpdk/spdk_pid2027724 00:32:43.602 Removing: /var/run/dpdk/spdk_pid2027934 00:32:43.602 Removing: /var/run/dpdk/spdk_pid2029659 00:32:43.602 Removing: /var/run/dpdk/spdk_pid2031006 00:32:43.602 Removing: /var/run/dpdk/spdk_pid2031343 00:32:43.602 Removing: /var/run/dpdk/spdk_pid2031691 00:32:43.602 Removing: /var/run/dpdk/spdk_pid2032068 00:32:43.602 Removing: /var/run/dpdk/spdk_pid2032384 00:32:43.602 Removing: /var/run/dpdk/spdk_pid2032547 00:32:43.602 Removing: /var/run/dpdk/spdk_pid2032739 00:32:43.602 Removing: /var/run/dpdk/spdk_pid2033055 00:32:43.602 Removing: /var/run/dpdk/spdk_pid2034055 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2037064 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2037451 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2037785 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2037926 00:32:43.603 Removing: 
/var/run/dpdk/spdk_pid2038497 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2038745 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2039326 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2039343 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2039662 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2039909 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2040181 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2040221 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2040840 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2041008 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2041297 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2041524 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2041734 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2041848 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2042129 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2042415 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2042696 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2042980 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2043220 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2043441 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2043652 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2043892 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2044158 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2044443 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2044728 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2045012 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2045300 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2045580 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2045860 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2046108 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2046336 00:32:43.603 Removing: /var/run/dpdk/spdk_pid2046577 00:32:43.862 Removing: /var/run/dpdk/spdk_pid2046789 00:32:43.862 Removing: /var/run/dpdk/spdk_pid2047055 00:32:43.862 Removing: /var/run/dpdk/spdk_pid2047362 00:32:43.862 Removing: /var/run/dpdk/spdk_pid2047709 00:32:43.862 Removing: /var/run/dpdk/spdk_pid2051539 00:32:43.862 Removing: /var/run/dpdk/spdk_pid2098247 
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2102777
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2113165
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2119560
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2123803
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2124348
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2136599
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2136601
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2137591
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2138450
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2139265
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2139898
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2140049
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2140290
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2140332
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2140334
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2141287
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2142186
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2143001
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2143620
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2143757
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2144034
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2145214
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2146341
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2155080
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2155368
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2160451
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2166537
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2169321
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2180132
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2189616
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2191465
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2192419
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2210568
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2214701
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2239789
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2244601
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2246769
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2248805
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2248940
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2249164
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2249432
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2250020
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2251917
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2253025
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2253597
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2255766
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2256584
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2257170
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2261480
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2271953
00:32:43.862 Removing: /var/run/dpdk/spdk_pid2276168
00:32:44.120 Removing: /var/run/dpdk/spdk_pid2282346
00:32:44.120 Removing: /var/run/dpdk/spdk_pid2283820
00:32:44.120 Removing: /var/run/dpdk/spdk_pid2285340
00:32:44.120 Removing: /var/run/dpdk/spdk_pid2290471
00:32:44.120 Removing: /var/run/dpdk/spdk_pid2294746
00:32:44.120 Removing: /var/run/dpdk/spdk_pid2302599
00:32:44.120 Removing: /var/run/dpdk/spdk_pid2302698
00:32:44.120 Removing: /var/run/dpdk/spdk_pid2307541
00:32:44.120 Removing: /var/run/dpdk/spdk_pid2307737
00:32:44.120 Removing: /var/run/dpdk/spdk_pid2307943
00:32:44.120 Removing: /var/run/dpdk/spdk_pid2308351
00:32:44.120 Removing: /var/run/dpdk/spdk_pid2308356
00:32:44.120 Removing: /var/run/dpdk/spdk_pid2313083
00:32:44.120 Removing: /var/run/dpdk/spdk_pid2313556
00:32:44.120 Removing: /var/run/dpdk/spdk_pid2318189
00:32:44.120 Removing: /var/run/dpdk/spdk_pid2321110
00:32:44.120 Removing: /var/run/dpdk/spdk_pid2326704
00:32:44.120 Removing: /var/run/dpdk/spdk_pid2332465
00:32:44.120 Removing: /var/run/dpdk/spdk_pid2341767
00:32:44.120 Removing: /var/run/dpdk/spdk_pid2349262
00:32:44.120 Removing: /var/run/dpdk/spdk_pid2349267
00:32:44.120 Removing: /var/run/dpdk/spdk_pid2368209
00:32:44.120 Removing: /var/run/dpdk/spdk_pid2369000
00:32:44.120 Removing: /var/run/dpdk/spdk_pid2369557
00:32:44.120 Removing: /var/run/dpdk/spdk_pid2370285
00:32:44.120 Removing: /var/run/dpdk/spdk_pid2371216
00:32:44.120 Removing: /var/run/dpdk/spdk_pid2371763
00:32:44.120 Removing: /var/run/dpdk/spdk_pid2372431
00:32:44.120 Removing: /var/run/dpdk/spdk_pid2373117
00:32:44.120 Removing: /var/run/dpdk/spdk_pid2377610
00:32:44.120 Removing: /var/run/dpdk/spdk_pid2377887
00:32:44.121 Removing: /var/run/dpdk/spdk_pid2384020
00:32:44.121 Removing: /var/run/dpdk/spdk_pid2384328
00:32:44.121 Removing: /var/run/dpdk/spdk_pid2387109
00:32:44.121 Removing: /var/run/dpdk/spdk_pid2395221
00:32:44.121 Removing: /var/run/dpdk/spdk_pid2395226
00:32:44.121 Removing: /var/run/dpdk/spdk_pid2400757
00:32:44.121 Removing: /var/run/dpdk/spdk_pid2402747
00:32:44.121 Removing: /var/run/dpdk/spdk_pid2404789
00:32:44.121 Removing: /var/run/dpdk/spdk_pid2405896
00:32:44.121 Removing: /var/run/dpdk/spdk_pid2408017
00:32:44.121 Removing: /var/run/dpdk/spdk_pid2409143
00:32:44.121 Removing: /var/run/dpdk/spdk_pid2418407
00:32:44.121 Removing: /var/run/dpdk/spdk_pid2418944
00:32:44.121 Removing: /var/run/dpdk/spdk_pid2419473
00:32:44.121 Removing: /var/run/dpdk/spdk_pid2421912
00:32:44.121 Removing: /var/run/dpdk/spdk_pid2422382
00:32:44.121 Removing: /var/run/dpdk/spdk_pid2422832
00:32:44.121 Removing: /var/run/dpdk/spdk_pid2426885
00:32:44.121 Removing: /var/run/dpdk/spdk_pid2427138
00:32:44.121 Removing: /var/run/dpdk/spdk_pid2428938
00:32:44.121 Removing: /var/run/dpdk/spdk_pid2429506
00:32:44.121 Removing: /var/run/dpdk/spdk_pid2429588
00:32:44.121 Clean
00:32:44.378 12:21:33 -- common/autotest_common.sh@1450 -- # return 0
00:32:44.378 12:21:33 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:32:44.378 12:21:33 -- common/autotest_common.sh@729 -- # xtrace_disable
00:32:44.379 12:21:33 -- common/autotest_common.sh@10 -- # set +x
00:32:44.379 12:21:33 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:32:44.379 12:21:33 -- common/autotest_common.sh@729 -- # xtrace_disable
00:32:44.379 12:21:33 -- common/autotest_common.sh@10 -- # set +x
00:32:44.379 12:21:33 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:32:44.379 12:21:33 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:32:44.379 12:21:33 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:32:44.379 12:21:33 -- spdk/autotest.sh@391 -- # hash lcov
00:32:44.379 12:21:33 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:32:44.379 12:21:33 -- spdk/autotest.sh@393 -- # hostname
00:32:44.379 12:21:33 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-22 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:32:44.636 geninfo: WARNING: invalid characters removed from testname!
00:33:06.622 12:21:54 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:07.572 12:21:56 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:09.475 12:21:58 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:10.852 12:22:00 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:12.755 12:22:01 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:14.659 12:22:03 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:16.036 12:22:05 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:33:16.036 12:22:05 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:33:16.036 12:22:05 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:33:16.036 12:22:05 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:16.036 12:22:05 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:16.036 12:22:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:16.037 12:22:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:16.037 12:22:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:16.037 12:22:05 -- paths/export.sh@5 -- $ export PATH
00:33:16.037 12:22:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:16.037 12:22:05 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:33:16.037 12:22:05 -- common/autobuild_common.sh@437 -- $ date +%s
00:33:16.037 12:22:05 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718014925.XXXXXX
00:33:16.037 12:22:05 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718014925.HESvw1
00:33:16.037 12:22:05 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:33:16.037 12:22:05 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
00:33:16.037 12:22:05 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:33:16.037 12:22:05 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:33:16.037 12:22:05 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:33:16.037 12:22:05 -- common/autobuild_common.sh@453 -- $ get_config_params
00:33:16.037 12:22:05 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:33:16.037 12:22:05 -- common/autotest_common.sh@10 -- $ set +x
00:33:16.037 12:22:05 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:33:16.037 12:22:05 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:33:16.037 12:22:05 -- pm/common@17 -- $ local monitor
00:33:16.037 12:22:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:16.037 12:22:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:16.037 12:22:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:16.037 12:22:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:16.037 12:22:05 -- pm/common@25 -- $ sleep 1
00:33:16.037 12:22:05 -- pm/common@21 -- $ date +%s
00:33:16.037 12:22:05 -- pm/common@21 -- $ date +%s
00:33:16.037 12:22:05 -- pm/common@21 -- $ date +%s
00:33:16.037 12:22:05 -- pm/common@21 -- $ date +%s
00:33:16.037 12:22:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718014925
00:33:16.037 12:22:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718014925
00:33:16.037 12:22:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718014925
00:33:16.037 12:22:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718014925
00:33:16.037 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718014925_collect-vmstat.pm.log
00:33:16.037 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718014925_collect-cpu-temp.pm.log
00:33:16.037 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718014925_collect-cpu-load.pm.log
00:33:16.037 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718014925_collect-bmc-pm.bmc.pm.log
00:33:16.973 12:22:06 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:33:16.973 12:22:06 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112
00:33:16.973 12:22:06 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:16.973 12:22:06 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:33:16.973 12:22:06 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:33:16.973 12:22:06 -- spdk/autopackage.sh@19 -- $ timing_finish
00:33:16.973 12:22:06 -- common/autotest_common.sh@735 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:33:16.973 12:22:06 -- common/autotest_common.sh@736 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:33:16.973 12:22:06 -- common/autotest_common.sh@738 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:33:17.232 12:22:06 -- spdk/autopackage.sh@20 -- $ exit 0
00:33:17.232 12:22:06 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:33:17.232 12:22:06 -- pm/common@29 -- $ signal_monitor_resources TERM
00:33:17.232 12:22:06 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:33:17.232 12:22:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:17.232 12:22:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:33:17.232 12:22:06 -- pm/common@44 -- $ pid=2443467
00:33:17.232 12:22:06 -- pm/common@50 -- $ kill -TERM 2443467
00:33:17.232 12:22:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:17.232 12:22:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:33:17.233 12:22:06 -- pm/common@44 -- $ pid=2443468
00:33:17.233 12:22:06 -- pm/common@50 -- $ kill -TERM 2443468
00:33:17.233 12:22:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:17.233 12:22:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:33:17.233 12:22:06 -- pm/common@44 -- $ pid=2443469
00:33:17.233 12:22:06 -- pm/common@50 -- $ kill -TERM 2443469
00:33:17.233 12:22:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:17.233 12:22:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:33:17.233 12:22:06 -- pm/common@44 -- $ pid=2443492
00:33:17.233 12:22:06 -- pm/common@50 -- $ sudo -E kill -TERM 2443492
00:33:17.233 + [[ -n 1911890 ]]
00:33:17.233 + sudo kill 1911890
00:33:17.242 [Pipeline] }
00:33:17.260 [Pipeline] // stage
00:33:17.266 [Pipeline] }
00:33:17.284 [Pipeline] // timeout
00:33:17.289 [Pipeline] }
00:33:17.307 [Pipeline] // catchError
00:33:17.313 [Pipeline] }
00:33:17.332 [Pipeline] // wrap
00:33:17.338 [Pipeline] }
00:33:17.354 [Pipeline] // catchError
00:33:17.364 [Pipeline] stage
00:33:17.367 [Pipeline] { (Epilogue)
00:33:17.382 [Pipeline] catchError
00:33:17.385 [Pipeline] {
00:33:17.401 [Pipeline] echo
00:33:17.403 Cleanup processes
00:33:17.409 [Pipeline] sh
00:33:17.694 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:17.694 2443570 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:33:17.694 2443916 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:17.708 [Pipeline] sh
00:33:17.992 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:17.992 ++ grep -v 'sudo pgrep'
00:33:17.992 ++ awk '{print $1}'
00:33:17.992 + sudo kill -9 2443570
00:33:18.004 [Pipeline] sh
00:33:18.287 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:18.287 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:33:23.557 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:33:26.856 [Pipeline] sh
00:33:27.141 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:27.141 Artifacts sizes are good
00:33:27.160 [Pipeline] archiveArtifacts
00:33:27.205 Archiving artifacts
00:33:27.387 [Pipeline] sh
00:33:27.675 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:33:27.692 [Pipeline] cleanWs
00:33:27.703 [WS-CLEANUP] Deleting project workspace...
00:33:27.703 [WS-CLEANUP] Deferred wipeout is used...
00:33:27.709 [WS-CLEANUP] done
00:33:27.711 [Pipeline] }
00:33:27.733 [Pipeline] // catchError
00:33:27.744 [Pipeline] sh
00:33:28.023 + logger -p user.info -t JENKINS-CI
00:33:28.031 [Pipeline] }
00:33:28.046 [Pipeline] // stage
00:33:28.052 [Pipeline] }
00:33:28.066 [Pipeline] // node
00:33:28.071 [Pipeline] End of Pipeline
00:33:28.121 Finished: SUCCESS